AIR Chief on Cat Models: ‘What’s All The Fuss About?’

August 19, 2011

When it comes to revised hurricane models, AIR Worldwide CEO Ming Lee has one basic question: “What really is all this big fuss about?”

Lee, whose Boston company launched the catastrophe modeling industry in 1987, is referring to the fuss largely caused by a competitor, California-based RMS, whose latest hurricane model, version 11, contains revisions indicating a need for sizable rate changes in some areas. The new RMS model includes a number of significant changes, but none has caused more of a stir than its increase in loss projections for inland areas in Florida and elsewhere, along with its indication that coastal exposure has slightly decreased.

Some in the industry have predicted that the RMS model 11 will lead reinsurers and insurers to revise rates and underwriting. Some have predicted that the new model, along with recent catastrophes around the globe, could trigger an end to the prolonged soft market. Still others have suggested that the RMS release has shaken insurer confidence in the use of catastrophe models. Standard & Poor’s Ratings Services placed its ratings on 17 natural peril catastrophe bonds on CreditWatch with negative implications because of the new RMS model.

“AIR’s view of U.S. hurricane risk has not changed, and therefore, we’re not quite sure what all the fuss is about surrounding the other model change,” Lee told Insurance Journal.

“And not only is it because our view of the risk hasn’t changed, but also because 20 of the top 25 residential insurers in Florida use the AIR hurricane model to manage their hurricane risk, which once again begs the question, ‘What really is all this big fuss about?’”

According to Lee, insurance companies should both employ common sense and ask a lot of questions about any hurricane model that changes dramatically from one version to the next.

“Just like the U.S. dollar or any other monetary currency, the value of model results is really based on the confidence that users have in those results. They need to understand what changes have taken place in the science,” he said. “And they should demand that the model components obey basic physical expectations of the underlying hazard, and that those components be independently evaluated.”

For example, he suggests a number of questions. What is the relative hurricane frequency in Florida versus New England? What is the frequency of category three, four and five storms relative to category one and two storms? How has the model been peer reviewed and validated?

“You would assume that with all models out there, you’ve got a common sense understanding of the risk. But recent evidence demonstrates that you might not be able to assume this for every model, and you really need to perform some of the due diligence on your own and ask many of the basic questions about the model,” said Lee.
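Lee’s due-diligence questions lend themselves to simple sanity checks. As a hypothetical illustration (the event set and field names below are invented, not any vendor’s actual catalog), a model user could tabulate a stochastic event set and confirm that it obeys basic physical expectations, such as major hurricanes being rarer than weak ones and Florida landfalls far outnumbering New England ones:

```python
from collections import Counter

# Hypothetical stochastic event set: (Saffir-Simpson category, landfall region).
# In practice this would be drawn from a model vendor's event catalog.
catalog = [
    (1, "FL"), (1, "FL"), (2, "FL"), (1, "NE"), (3, "FL"),
    (1, "FL"), (2, "GULF"), (4, "FL"), (1, "GULF"), (2, "FL"),
    (3, "GULF"), (1, "FL"), (5, "FL"), (2, "NE"), (1, "FL"),
]

cat_counts = Counter(cat for cat, _ in catalog)
region_counts = Counter(region for _, region in catalog)

weak = cat_counts[1] + cat_counts[2]                    # category 1-2 storms
major = cat_counts[3] + cat_counts[4] + cat_counts[5]   # category 3-5 storms

# Basic physical expectations a user should verify:
print(f"Category 3-5 vs. 1-2 frequency ratio: {major / weak:.2f}")   # well below 1
print(f"Florida vs. New England landfall ratio: {region_counts['FL'] / region_counts['NE']:.1f}")
```

None of this requires vendor tooling; it only requires access to the event catalog and a willingness to ask the basic questions Lee describes.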

In fact, Lee maintains, if a model changes dramatically, that should be a red flag to users.

“[I]f you’ve got the model right and the model is robust, then you should not be seeing dramatic changes without very good explanations,” he said.

It’s not that AIR never changes its hurricane model. In fact it does, regularly.

But, according to Lee, AIR takes an incremental approach.

“Our practice in the past has been to regularly update the model to incorporate the latest science and engineering research, rather than to unleash a massive update once every five years or so. Part of that learning comes from our practice of performing real-time loss estimates, as well as working at the science of the problem,” he said.

Thus AIR recently modified its calculations on wind speeds.

“We had a lot more good-quality wind observations, from hurricane-hunter aircraft at the flight level as well as at the surface level. We had wind speed data at the surface level that was previously unavailable. So that gave us a better understanding of the structure of hurricanes and helped us to better estimate local wind speeds,” he said.

AIR has also updated what Lee calls the “vulnerability components” in its model, based on a multi-year, peer-reviewed study of the evolution and enforcement of building codes at the state and local levels, “and what impact those building codes have on vulnerability.”

Lee said AIR applies data from a “core, robust foundation” and its work goes through a peer review process.

“We have actually quite a large number of reviewers outside of AIR, in academia as well as in government, the weather service. They review the hazard component of the model, as well as the vulnerability component,” he said.

According to the AIR leader, the validation process is what distinguishes competitors in the modeling business.

“[E]very company starts with the same raw data relative to … how many hurricanes have there been, where the hurricanes made landfall, how all that data fits in. And that fundamental science data is the same, even though we will end up with different results,” he said. “Now, in part that’s because there are some differences in scientific judgment in the components of the models. But in part it’s just because the validation process is different in different companies. I think that that is … one of the differences in the approach that we have.”

He said AIR provides real-time loss estimates of events as a service to its clients, but each one is also a test of its model and part of a process that helps it improve the model incrementally.

“[W]hen we do the real-time estimates, it’s an opportunity for us to test the model, and it’s an opportunity for us to learn and improve on what we were doing, which is what we have done. Especially when you break those losses down, not only looking at the industry but by region, by line of business or by company, the difference between the modeled and the actual losses will teach you a lot about hurricane modeling, or, for that matter, about modeling for a different peril.”
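As a rough sketch of the comparison Lee describes (all figures below are invented for the example, not actual AIR estimates or reported losses), post-event modeled losses can be set against actual losses segment by segment, with the discrepancies showing where the model needs work:

```python
# Hypothetical post-event comparison, figures in $ millions.
# Both tables are made up for illustration only.
modeled = {
    ("FL-coastal", "residential"): 820,
    ("FL-inland", "residential"): 140,
    ("FL-coastal", "commercial"): 310,
    ("FL-inland", "commercial"): 55,
}
actual = {
    ("FL-coastal", "residential"): 790,
    ("FL-inland", "residential"): 205,
    ("FL-coastal", "commercial"): 330,
    ("FL-inland", "commercial"): 60,
}

for segment, m in modeled.items():
    a = actual[segment]
    error = (m - a) / a * 100  # signed percentage error vs. reported losses
    region, line = segment
    print(f"{region:12s} {line:12s} modeled={m:5d} actual={a:5d} error={error:+6.1f}%")
```

In this invented example, the largest miss sits in the inland residential segment, and drilling into exactly that kind of breakdown, by region and line of business, is where Lee says the learning happens.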

The RMS revision in particular has drawn attention to how models cope with inland penetration of storm tracks.

AIR’s U.S. hurricane model is based on historical data dating back to 1900. Lee said that since 1900, there have been 65 documented events that caused significant inland losses. Following its incremental approach, AIR has accounted for inland losses in its model for many years. Over the years, that has led AIR to add inland states subject to storm losses, including Arkansas, Kentucky, Ohio, Oklahoma and Tennessee, to the coastal states. In 2010, it added Illinois, Indiana and Missouri, bringing the total number of states covered by the AIR U.S. hurricane model to 29.

“So this is nothing new. We’ve been doing it since inception, because this is just the nature of hurricanes, and what hurricanes have done in the United States and what we have experienced,” Lee said.

Lee said some carriers and rating agencies may be misapplying cat models.

For one, carriers should not look at any specific number out of a model and then manage to it. “One needs to embrace the fact that there’s uncertainty in nature. By virtue of that uncertainty, and by virtue of the fact that we don’t know everything, that’s the reason why we build models, and models are still models. So you need to view model results through the lens of uncertainty,” he said.

Carriers should also look at ranges of outcomes and test their decisions against some “what ifs,” he said, “to take into account the uncertainty in the model output, but really, uncertainty in terms of what really is going to happen in nature.”

He offers similar advice to rating agencies.

“Part of my comment would really be the same to the rating agencies as to the companies, which is the rating agencies also should not be asking for and looking at point estimates: what is the 1-in-100 or what is the 1-in-250,” he said.

“And in fact, many rating agencies do that. I think they should get away from that practice and instead also embrace the fact that there is uncertainty in the models, in the model output, and there is uncertainty in terms of what Mother Nature is going to do. So they ought to allow for and have a better understanding of uncertainty in the questions that they ask and how they interpret the responses from companies to the questionnaires that they have.”
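For context on those figures: in catastrophe modeling, a 1-in-100 PML is conventionally read as the 99th percentile of the modeled annual loss distribution, and a 1-in-250 PML as the 99.6th percentile. The sketch below, using an entirely made-up loss distribution rather than any vendor’s output, shows how those point estimates are single quantiles of a wide distribution, and how a range conveys the uncertainty Lee describes:

```python
import random

random.seed(11)

# Hypothetical annual losses ($ millions) from a toy catastrophe model:
# most years are quiet, about 30% carry hurricane losses of mean $40M.
annual_losses = sorted(
    random.expovariate(1 / 40) if random.random() < 0.3 else 0.0
    for _ in range(100_000)
)

def pml(return_period: int) -> float:
    """Point-estimate PML: the (1 - 1/T) quantile of the annual loss distribution."""
    idx = int(len(annual_losses) * (1 - 1 / return_period))
    return annual_losses[idx]

print(f"1-in-100 PML: {pml(100):7.1f}")
print(f"1-in-250 PML: {pml(250):7.1f}")

# A range around the point estimate says more than the point itself:
print(f"1-in-50 to 1-in-250 range: {pml(50):.1f} to {pml(250):.1f}")
```

Reporting such a range, or rerunning the calculation under different frequency assumptions, is one way to embrace the uncertainty Lee is urging, rather than managing to a single number.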

Lee does not think rating agencies are allowing for enough uncertainty.

“I think they’re being a little bit too deterministic and very specific in the numbers. They both — company executives and rating agencies — gravitate to some specific numbers, and they call them PMLs [probable maximum loss], and that is actually poor practice. If the rating agencies would embrace uncertainty more, I think company executives would follow suit,” he said.

At the same time, he said, carrier executives should be aware that rating agencies do not require them to utilize any particular cat model and also do not mind if carriers switch or emphasize one model more than others.

“They don’t require any particular brand of model. And for those companies who change or are thinking of changing their reliance on one model over another that they have been using, if they want to switch, there … isn’t necessarily a barrier that the rating agencies are putting up there. But some companies kind of perceive that there is.”

Lee said this misperception is pretty widespread “but the reality of it is that if you have a good explanation and you can demonstrate that you really have taken ownership of the catastrophe risk management process and you understand what the models are doing and what the model output means, then it’s perfectly OK to be weighting one model over another model.”

Listen to the 3-part interview with AIR’s Ming Lee on Insurance Journal TV.
Part 1: ‘What’s All the Fuss?’
Part 2: What Carriers, Rating Agencies Should Ask About Cat Models
Part 3: The Expanding Use of Catastrophe Models
