Members of Aon Benfield’s Analytics Team Explain Cat Models

October 18, 2013

“Our remit at Aon Benfield Analytics is really to do the risk quantification part of reinsurance transactions,” said John Moore, the head of International Analytics operations. He’s responsible for overseeing the collection of massive amounts of data, the construction of catastrophe models from that data, and the application of those models to the risks written by Aon’s clients.

After speaking with Moore, we were fortunate enough to also talk with two members of his team. Adam Podlaha is the Head of Impact Forecasting International; equipped with a PhD in flood model development from Charles University in Prague, he oversees the translation of the data Aon collects into usable models, a lengthy process that now takes two years or more.

Goran Trendafiloski is the head of earthquake model development at Impact Forecasting. He’s in charge of processing all of the data that pertains to areas of the world that are exposed to quake risk.

While property catastrophe coverage is the most familiar and most used sector for cat models, Moore explained that it’s not the only area he and his team work on. “We do reinsurance modeling for all types of other lines,” including casualty and other long tail lines.

As a result, the team has people with a number of different skills. “Our team is very eclectic, but it’s essentially technical; so it’s everything from actuaries to engineers [Moore’s background is in engineering], scientists, seismologists, people with a hydrology background.” Altogether, approximately 530 people work in analytics at Aon Benfield worldwide.

That number reflects growth driven by the need to construct more and better models, all of it within “the last five or ten years.” How important are they? In addition to property cat, Moore’s team also comes up with models for specialty lines, including “marine, retrocession, non-marine, aviation, casualty and long tail lines,” he said. “We’ve looked across the business, and we couldn’t find anywhere that analytics, as a team, isn’t actually playing a part.”

In order to do so effectively, analytics relies on massive data inputs. When reliable data isn’t available, or isn’t sufficiently detailed, “sometimes that makes it harder for us to actually do our work.” Despite the difficulties, there are now very few areas where analytics isn’t being used in some form. It is essential for clients to “understand the value of what they risk transfer, their strategy for risk transfer, and how you price it.”

Moore sees growth in selected emerging markets – notably Asia, Southeast Asia and “parts of Latin America. Without a doubt that’s where we will see the pace of change in terms of insurance and reinsurance.” These are regions that are exposed to “a lot of perils,” and for that reason Aon Benfield is working on improving its understanding of the risks involved “to strive to make the market more possible to be expanding in those areas.”

He also feels that additional capital, including alternative capital, is a positive development as “we can expand what influence insurance and reinsurance has in the world, and I think that’s a massive opportunity and a big challenge, because I think – whatever form of capital – people are hungry and demand the knowledge about the risk, the data, the portfolio – where things actually are, what are they, and just understanding better the risk(s).” His team is an essential part of that effort, as “we need to expand our products, and we need to make sure that we’re relevant, and then the capital will stay; I’m sure.”

Some of the more challenging areas for the team are emerging risks, such as those posed by cyber risks, contingent business interruption (CBI) and electromagnetic storms. These are all areas where Aon Benfield Analytics is “trying to make people aware that they are out there; they do happen.” The next step is to figure out how best to effectuate risk transfers that cover these newer perils. “Our job is to make people aware, and then to do what we can to make a conversation more possible towards risk transfer.”

He explained that doing basic research, finding out what has actually happened in the past, is a significant part of the analytics team’s efforts. To do so, they draw on as much research as is available from academics and universities. For instance, there has been a lot of research done on solar flares, which are ultimately the cause of electromagnetic storms.

Moore also pointed out that people tend to view things from a shorter-term perspective than they should. Just because an event hasn’t happened recently doesn’t mean that it won’t eventually; making people aware of these types of risks is therefore part of his job. “Longer term events often show us both frequency and severity,” he said. The task then becomes to find the causes of the event and to pinpoint its consequences, in order eventually to put a price on what coverage the re/insurance industry may be able to provide.

Climate change is “a huge area” of risk the analytics team is working on. This requires consultation with university centers – Moore works closely with the Tyndall Centre at the University of East Anglia – where issues of climate change are presented to governments. “The way I see it, we need to work and understand what the implications are – as much as we can.” The analysis includes bringing in climate models to gauge how much the world could be affected by rising temperatures.

Models are now an integral part of the reinsurance market, to the extent that Moore said “no risks are written without using them.” As the Head of Impact Forecasting International, Adam Podlaha is charged with constructing those models. “We’re developing models for earthquakes, wind storms, floods, but also human risk modeling, so terrorism and most recently riots in Indonesia.” He also does models for life insurance.

Podlaha’s initial focus is on gathering as much data as possible. As an example, for floods in the Czech Republic, “you get data from local sources,” he said. The same procedure is used for other models.

The basic data used to build the models from local sources is mostly historical, but it is only the first step. “When the model is built, we go to the client’s data to run through the models to give us the results,” he explained; “so it’s very similar to how the catastrophe [modeling] firms will be working.” He added that claims data is also “absolutely necessary,” in order to “calibrate and validate the damage functions, which are part of the catastrophic models.”
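
To make that calibration step concrete, the following is a minimal sketch, in Python, of how a flood damage function might be fitted to and validated against claims data. The curve form, the numbers and the function names are hypothetical illustrations, not Impact Forecasting’s actual methodology.

```python
# A minimal, illustrative sketch of calibrating a flood damage function against
# claims data. The curve form and the data below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical claims records: flood depth at the insured location (metres)
# and the observed damage ratio (claim paid / total insured value).
depth_m      = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 1.8, 2.5, 3.0])
damage_ratio = np.array([0.02, 0.05, 0.10, 0.20, 0.35, 0.55, 0.70, 0.78])

def damage_function(depth, k, n):
    """Saturating damage curve: damage ratio approaches 1.0 as flood depth grows."""
    return 1.0 - np.exp(-k * depth**n)

# Calibrate the curve parameters to the claims observations.
params, _ = curve_fit(damage_function, depth_m, damage_ratio, p0=[0.5, 1.0])

# Validate: compare modelled vs observed damage ratios on the same claims.
modelled = damage_function(depth_m, *params)
rmse = np.sqrt(np.mean((modelled - damage_ratio) ** 2))
print(f"Calibrated parameters k={params[0]:.3f}, n={params[1]:.3f}; validation RMSE={rmse:.3f}")
```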

Podlaha looked back ten years, when he started with Aon Benfield, to a time “when there were very few models, so we started developing models for territories where no other models existed.” First on the list were Central and Eastern European floods, for which at that time “no models existed.” Aon Benfield has been releasing models to cover this territory since 2002.

Now they are working on other territories for which models haven’t been produced – notably flood models for Thailand. “After the floods in 2011 multiple companies released a Thai flood model, to take care about the risk, and to see whether it was insurable, or not.” Aon Benfield released such a model in 2012.

After the events in Chile and Japan the loss estimates from tsunamis rose dramatically. As a result models are being constructed and released in conjunction with earthquake models, as they are part of the “cause of the same events.” Aon Benfield plans to release such a model for Chile at the end of the year. “You need a big event to move the industry forward,” Podlaha said, “and that’s exactly what happened with the tsunamis.”

For models of “man-made” perils such as terrorism, Podlaha explained that the data changes “much more often.” For example if there’s a change of government, “the definition of what the losses will be” also changes. The most important thing in constructing this type of model is to use data that is applicable to conditions over a period of time.

He added that it “isn’t as complex as it sounds.” The first step is identifying the potential “targets,” such as government buildings. “Then you define which [type of] attacks can happen on those targets.” These range from a lone gunman to a massive car bomb, and the model is constructed around those that seem the most likely – and are likely to occur more frequently.

“These are probabilistic models,” he said, but most clients also want a “deterministic” analysis as well – a “what if?” scenario; i.e., given a specific type of attack on a specific site, what are the losses going to be? In many cases the models use actual events to ask: if the same event happened again, what would the probable damages be?
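
The contrast between the two views can be sketched in a few lines of Python. The targets, attack types, frequencies and loss figures below are invented purely for illustration; a real terrorism model would rest on a far richer event catalogue and detailed damage footprints.

```python
# A highly simplified contrast of the probabilistic and deterministic views.
# Every number and name in the catalogue is hypothetical.
import random

# Hypothetical event catalogue: (target, attack type, annual frequency, loss if it occurs)
catalogue = [
    ("government building A", "lone gunman",    0.050,   5_000_000),
    ("government building A", "large car bomb", 0.005, 400_000_000),
    ("transport hub B",       "lone gunman",    0.030,   3_000_000),
    ("transport hub B",       "large car bomb", 0.002, 250_000_000),
]

def simulate_annual_loss(rng: random.Random) -> float:
    """One simulated year: each event occurs independently with its annual frequency."""
    return sum(loss for _, _, freq, loss in catalogue if rng.random() < freq)

# Probabilistic view: average annual loss over many simulated years.
rng = random.Random(42)
years = 100_000
aal = sum(simulate_annual_loss(rng) for _ in range(years)) / years
print(f"Probabilistic average annual loss: {aal:,.0f}")

# Deterministic "what if?" view: a single chosen scenario, regardless of frequency.
target, attack, _, loss = catalogue[1]  # large car bomb at government building A
print(f"Deterministic scenario loss ({target}, {attack}): {loss:,.0f}")
```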

Podlaha cited the floods in Toronto as an example, explaining that even before they occurred, Aon’s clients were asking what the probable losses would be. Using satellite images, which were added to the platform, “we could predict how big the losses would be.”

To calculate those losses it’s necessary to know the property values in the affected area. “You need to know where it is, so you know the postal code or street address. You need to know what the value is, which can come from an insurance policy, and you need to know what type of construction it is” – commercial, two story, one story, brick, reinforced concrete, etc. Much of that data is obtained from the primary insurers, but, if it can’t be, “we go to a local statistical office or census bureau, and collect the data.”
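
The minimum exposure record Podlaha describes – where the risk is, what it is worth, and how it is built – can be pictured roughly as follows. The field names and the simple loss calculation are assumptions for illustration, not an actual Aon Benfield data schema.

```python
# A sketch of a minimal exposure record and a ground-up loss calculation.
# Field names and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class ExposureRecord:
    postal_code: str        # or a full street address / geocoded coordinates
    insured_value: float    # typically taken from the insurance policy
    construction: str       # e.g. "brick", "reinforced concrete"
    storeys: int            # e.g. 1 or 2
    occupancy: str          # e.g. "commercial", "residential"

def ground_up_loss(record: ExposureRecord, damage_ratio: float) -> float:
    """Loss at this location, given a modelled damage ratio for the event."""
    return record.insured_value * damage_ratio

# Example: a two-storey brick commercial building hit by an event for which
# the vulnerability module predicts 15% damage to its value.
risk = ExposureRecord("M5V 2T6", 2_000_000.0, "brick", 2, "commercial")
print(ground_up_loss(risk, 0.15))  # 300000.0
```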

Podlaha’s latest release is a European windstorm model with several special features. He mentioned two: first, it analyzes “clustering,” i.e. when a number of events happen in one year, by using global climate models. Second, the model utilizes a method to calculate “damage functions.” These are indicators of the amount of damage that will occur given, for instance, a certain wind speed, and they are calculated using actual clients’ claims information.
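
The clustering idea can be illustrated with a toy comparison: if storms cluster, the count of events in a year is more variable than an independent (Poisson) count with the same average, so severe multi-storm years become more likely. The parameters in the Python sketch below are invented and are not taken from the model itself.

```python
# Toy illustration of windstorm clustering: same average number of storms per
# year, but an overdispersed (clustered) count makes busy years more likely.
import numpy as np

rng = np.random.default_rng(1)
years = 100_000
mean_storms_per_year = 4.0

# Independent (non-clustered) events: Poisson annual counts.
poisson_counts = rng.poisson(mean_storms_per_year, size=years)

# Clustered events: negative binomial counts with the same mean but extra variance.
r = 2.0
p = r / (r + mean_storms_per_year)          # mean = r * (1 - p) / p = 4.0
clustered_counts = rng.negative_binomial(r, p, size=years)

for name, counts in [("Poisson", poisson_counts), ("Clustered", clustered_counts)]:
    print(f"{name}: mean={counts.mean():.2f}, "
          f"P(8 or more storms in a year)={np.mean(counts >= 8):.3f}")
```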

Although neither John Moore nor Adam Podlaha said so in as many words, what they have done, and are doing, has changed the face of the re/insurance industry. It’s now not only possible, but absolutely necessary, to have a model of the risk that’s being underwritten before a loss happens. The more sophisticated these models are, the better insight clients, insurers and reinsurers have into those risks.

One area, however, that can be modeled but can’t really be predicted is earthquakes. That’s Goran Trendafiloski’s specialty as the head of earthquake model development at Impact Forecasting – “impact” being a particularly apt description. “Earthquakes are terrifying,” he said, “for one simple reason. We don’t know when they can happen; their severity is very hard to understand and to handle, and the destruction and the losses they are making vary; they can be localized or widespread.”

In addition to disrupting the lives of entire communities, they also destroy businesses, interrupt their work and create “post loss amplification.” All of which, Trendafiloski said, “is becoming an emergent need to model and understand.” It’s no longer enough to only know “the potential primary loss, but also all of this chain of secondary losses.”

These include fires, flooding from burst water mains, landslides and, perhaps the most devastating, tsunami waves triggered by earthquakes, such as the ones in Japan and Chile. “All of these [secondary perils] contribute to the overall losses.”

Although it’s still impossible to predict when or where an earthquake may strike, Aon Benfield is working on trying to assess possibilities. “In our models in fact we are trying to catch up all possibilities, because we are building probabilistic models.” This requires analyzing different magnitudes of an earthquake relative to its location.
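
That probabilistic approach can be sketched, very roughly, as a stochastic event set: the model samples how many earthquakes a source zone produces each year and what magnitude each event has. The annual rate, Gutenberg-Richter parameters and magnitude bounds in the Python sketch below are hypothetical illustration values only, not figures from Impact Forecasting’s models.

```python
# A minimal sketch of a probabilistic earthquake event set for one source zone.
# All parameters are hypothetical illustration values.
import numpy as np

rng = np.random.default_rng(7)

annual_rate = 0.2    # hypothetical annual rate of M >= 5.0 events in the zone
b_value = 1.0        # hypothetical Gutenberg-Richter b-value
m_min, m_max = 5.0, 8.0

def sample_magnitudes(n: int) -> np.ndarray:
    """Draw magnitudes from a doubly truncated Gutenberg-Richter distribution."""
    u = rng.random(n)
    beta = b_value * np.log(10.0)
    # Inverse-CDF sampling of the truncated exponential magnitude distribution.
    return m_min - np.log(1.0 - u * (1.0 - np.exp(-beta * (m_max - m_min)))) / beta

# Build a simple stochastic event set: Poisson number of events per year,
# each event assigned a sampled magnitude.
years = 10_000
event_counts = rng.poisson(annual_rate, size=years)
magnitudes = sample_magnitudes(event_counts.sum())
print(f"{magnitudes.size} simulated events, "
      f"largest magnitude {magnitudes.max():.2f}, "
      f"share at or above M6.5: {np.mean(magnitudes >= 6.5):.3f}")
```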

That location could be anywhere in the world, which is why the analytics team works internationally. However, there are “zones that are more physically active – California, Japan, Asia.” As a result the models need to analyze the structures at risk, which change over the years. Trendafiloski and the earthquake team therefore work with clients to “understand what are the building properties in their respective portfolios, because that is very important to know and to get a reliable loss estimate.

“In our latest models we are trying to get the highest possible spatial [perspective], and also [the best possible] resolution,” he continued. This requires ascertaining the “period of construction, the position,” and to a certain extent the artistic design of the structures. The more information that can be part of the model, the more accurate it will be.

As a comparison of the Sichuan earthquake in China and the earthquake in Chile shows, building codes and their enforcement can make a significant difference in how great the losses from any given earthquake are going to be. “Building codes help us very much to incorporate new damage curves and new building loss curves into our model; that’s very important,” Trendafiloski said.

As the earthquake in Italy showed, older buildings are especially vulnerable to earthquakes, as they weren’t designed to resist them. Impact Forecasting models “incorporate not just regional specific [data], but also country specific [data]. We cannot simply translate existing solutions from one region to another region. It’s quite different, and we don’t work like that.”

Over the years, constructing models has grown immensely more complicated, due to the vast amounts of data that must now be studied to produce ever more accurate models – demanded by companies and primary carriers – and the emergence of alternative capital to fund reinsurance vehicles. This doesn’t mean the people at Aon Benfield’s International Analytics operation won’t keep trying, as the need for them grows. “Difficult, yes; impossible, no,” said Moore.
