With Smarter, Joined-Up Pricing, There’s No Such Thing as a Bad Risk: Viewpoint

By Mark Eastham | August 14, 2019

It’s a well-trodden truth in our industry that not all risks are created equal and that, for insurers, some risks are riskier than others.

However, new technologies mean new avenues of innovation for insurers to explore, providing opportunities to drastically improve risk pricing around more complex perils, which ought to be a huge priority for many.

As the pace of change continues to accelerate, only those insurers that take a thoughtful, holistic approach to innovation in risk pricing – a three-pronged approach based on quality data, machine learning and effective deployment – can be confident of making the strides required to remain competitive in the long run.

Quality Data

The first thing an aspiring innovator should consider is data. Of course, the more data an organization holds, the better its chances of generating new insights, more accurately pricing risks and, ultimately, gaining a competitive edge. It’s this thinking that sees many insurers investing in new and interesting tools to analyze non-traditional data: second- or third-party data, often acquired from data wholesalers, that gives them additional information about their customers.
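In practice, enriching internal records with wholesaler data often comes down to joining the two on a shared key. The sketch below is purely illustrative – the field names, postcodes and crime index values are hypothetical, not drawn from any real dataset:

```python
# Illustrative sketch: enrich internal policy records with a third-party
# dataset keyed on postcode. All field names and values are hypothetical.

policies = [
    {"policy_id": "P1", "postcode": "AB1", "sum_insured": 250_000},
    {"policy_id": "P2", "postcode": "CD2", "sum_insured": 180_000},
]

# Non-traditional, wholesaler-style data keyed by postcode.
external = {
    "AB1": {"area_crime_index": 1.3},
    "CD2": {"area_crime_index": 0.9},
}

def enrich(policies, external):
    """Merge external attributes into each policy record by postcode."""
    enriched = []
    for p in policies:
        extra = external.get(p["postcode"], {})  # tolerate missing postcodes
        enriched.append({**p, **extra})
    return enriched

for row in enrich(policies, external):
    print(row)
```

The point is less the mechanics of the join than the decision of which external attributes are worth joining in at all.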


However, the sudden availability of reams of data has given rise to the misconception that all data is equally valuable, and that it’s simply a matter of getting hold of as much as you can. In reality, the return on investment (in time and attention, if not money) soon diminishes as more data is piled on. Data selection, therefore, is key.

So, while looking at large volumes of data can be worthwhile, depending on the situation, getting more and more data for the sake of it should not be a strategy. On the contrary, most of the largest gains will involve applying newer, sophisticated modeling techniques to existing, well-tested predictors of the likelihood of claims.

Of course, it’s still imperative to make sure this data is of the highest quality. Insurers who sell direct to the consumer have the advantage of determining their own question sets and can therefore collect the granular, precise data they need to accurately model risk. Best of all, it means that datasets are continuously updated, and risk calculations can change accordingly. When intermediaries are involved, by contrast, it is they who set the question sets and so decide what data is collected from the customer.

Machine Learning

Alongside this, insurers should also consider how they analyze their data. Most risk models are built using generalized linear models (GLMs). But as larger volumes of more sophisticated data come into play, GLMs show their limitations. There is a ceiling to the number of interactions a GLM can capture, and a human is needed to identify those interactions and rate them correctly.
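The interaction problem can be made concrete with a toy example. The claim frequencies below are invented purely for illustration: a purely additive model, with one loading per factor level and no interaction term, cannot reproduce a cell where two factors combine to be far worse than the sum of their parts – unless a human spots that combination and adds it explicitly:

```python
# Hypothetical claim frequencies on a 2x2 risk grid (illustrative numbers only).
actual = {
    ("mature", "standard"):   0.05,
    ("mature", "high_power"): 0.07,
    ("young",  "standard"):   0.07,
    ("young",  "high_power"): 0.20,  # interaction: far worse than additive effects imply
}

# An additive, GLM-style model (identity link for simplicity): a base rate
# plus one loading per factor level, with no interaction term.
base = 0.05
loading = {"young": 0.02, "high_power": 0.02}

def additive_predict(age, power):
    rate = base
    if age == "young":
        rate += loading["young"]
    if power == "high_power":
        rate += loading["high_power"]
    return rate

for cell, observed in actual.items():
    predicted = additive_predict(*cell)
    print(cell, "observed:", observed, "additive prediction:", round(predicted, 2))
# The ("young", "high_power") cell is badly under-predicted (0.09 vs 0.20)
# until an analyst adds an explicit young x high_power interaction term.
```

Tree-based machine learning methods, by contrast, can surface that kind of combination from the data itself, which is exactly the advantage described below.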


On the other hand, machine learning offers insurers the opportunity to interrogate larger datasets more quickly and to identify emergent trends or patterns without the need for much human supervision. These datasets can come from both insurers’ internal data, and external data sources such as open-source and paid-for datasets.

Even better, the benefits brought by machine learning increase dramatically the more data an insurer has. There is little point in applying machine learning to small and simple datasets where a GLM would suffice. But those insurers that apply cutting-edge algorithms to belt-busting datasets unlock a distinct advantage over the competition. Not only do they have more data from which to uncover insights, but they also derive greater value from each datapoint.

Effective Deployment

However, even the best open-source machine learning algorithms, applied to the richest datasets, amount to nothing if insurers can’t quickly deploy their models into a live trading environment at the highest level of granularity.

Ultimately, any innovation around risk pricing requires insurers to get their algorithms out to consumers, on a machine learning platform that can make decisions in real-time to quickly price risks and make a call on the level of cover that can be offered. These algorithms do this by breaking down a risk and modeling each element by using whatever source is best. There is no longer one tool to rule them all.
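The "no one tool to rule them all" idea can be sketched as a pricing pipeline in which each peril element is scored by whichever model or data source is assumed best for it. Everything below is a hypothetical placeholder – the element models, factors and loadings are invented for illustration, not real rates:

```python
# Hypothetical sketch: decompose a household risk into peril elements and
# price each with a different model/source. All figures are illustrative.

def fire_element(risk):
    # e.g. driven by property attributes from the insurer's own data
    return 120.0 * risk["rebuild_cost_factor"]

def theft_element(risk):
    # e.g. driven by an external, area-level crime dataset
    return 45.0 * risk["area_crime_factor"]

def flood_element(risk):
    # e.g. driven by a paid-for flood peril model
    return 60.0 * risk["flood_zone_factor"]

ELEMENT_MODELS = [fire_element, theft_element, flood_element]

def price_risk(risk, expense_loading=1.25):
    """Sum the modeled cost of each peril element, then apply loadings."""
    pure_premium = sum(model(risk) for model in ELEMENT_MODELS)
    return round(pure_premium * expense_loading, 2)

quote = price_risk({
    "rebuild_cost_factor": 1.1,
    "area_crime_factor": 0.8,
    "flood_zone_factor": 1.0,
})
print(quote)
```

In a live environment, each element model could be swapped or retrained independently as better data sources emerge, without rebuilding the whole rating structure.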

Yet this is easier said than done. Many large insurers are wedded to legacy platforms that are difficult to get away from. And smaller firms can often deploy very effectively in their narrow niches, but lack the expertise to do so in more complex product areas.

Start Small and Scale

Very few insurers today are making full, joined-up use of these three strategies, and few are in a position to do so immediately. New, agile and innovative insurtechs are making waves with machine learning platforms that enable fantastic analysis they can take to the consumer at speed. But without the masses of historical claims data that traditional insurers have at their disposal, it is very difficult for them to price risks accurately. And it is unlikely that many have the business appetite to incur the cost of learning from their own claims experience over time.

On the other end of the spectrum, larger insurers have the data, but transitioning to a new platform is a huge task, carrying commensurate costs and uncertainties. It cannot be done overnight. Some insurers are taking a more nimble approach through incubator models, testing and developing these approaches away from the core business in preparation for wider integration down the line. However, this carries the downside of creating another disconnected silo that will need to be brought back in at a later stage – it simply delays part of the problem.

Like so many golden opportunities in business, the best approach is probably one of the trickiest to achieve – namely the creation of agile, empowered, cross-departmental working groups, incorporating data scientists, underwriters, compliance, and IT security, all working collectively towards shared objectives.

Nonetheless, this is a journey that all insurers will need to go on – particularly those working in more complex markets like home and contents insurance, where the difficulty in pricing some risks presents an opportunity that is simply too big to ignore. Those with the ability to take a smarter, joined-up approach to the transition have a rare chance to gain a substantial competitive advantage.
