Re/Insurers Need a Data Strategy to Build Underwriting Transformation Programs

By Sachin Kulkarni | October 20, 2021

Years of low interest rates and burgeoning catastrophe losses have accelerated the hardening of the reinsurance market, as the January 2021 renewals clearly demonstrated. The COVID-19 pandemic has, moreover, entrenched the “lower for longer” interest rate environment.

Given this suboptimal underwriting profitability, reinsurers must bolster underwriting performance and drive pricing improvements, but such an underwriting transformation program requires them to strengthen their data and advanced analytics capabilities.

Regardless of systems, the underwriting organization’s fundamental intellectual property rests on the experience and human talent it applies to understanding risk, together with the data pertaining to the risk itself. Underwriting talent can be grown and honed, or hired, but data should be immutable. Data forms the basis for catastrophe modeling, risk profiling, anomaly detection, reinsurance placement, and the quantification of emerging risks.

Within any reinsurance company, an underwriting department looking to improve and upgrade data and analytics maturity has three main concerns:

  • Security
  • Migration
  • Formatting

Security is ongoing and essential. Reinsurers not only have to comply with regulations like the EU’s General Data Protection Regulation (GDPR), but also, quite rightly, must be concerned about the security of their systems, especially as more and more users and workloads move to the cloud. Here, reinsurers need to be certain that providers’ security is more than sufficient, and that in-house IT resources have the capacity to respond to the constant threat of hacking, ransomware, and malware.

Migration of data from one system or format to another must be carried out by a trusted partner experienced in this space. The migration partner not only has to understand the requirements of the different systems and environments between which the data is being migrated, but must also have a fundamental understanding of the risk transfer business itself. This is especially true in the reinsurance world.

Finally, formatting data to fit an ever-growing variety of systems is a huge source of frictional cost and uncertainty. Year after year, the same risk will appear, but if it arrives from broker XYZ rather than broker ABC, the underwriter may be faced with a new presentation format, which has to be decoded and deciphered for each of the systems at the underwriter’s disposal – accounting, claims, catastrophe and capital models – and then re-keyed into each, often in as many formats as there are systems.
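To make that friction concrete, here is a minimal Python sketch of the normalization step. The broker field names, their conventions, and the canonical schema are all hypothetical, invented purely for illustration; real submissions are far messier.

```python
from dataclasses import dataclass

@dataclass
class RiskRecord:
    """Hypothetical canonical schema shared by downstream systems
    (accounting, claims, catastrophe and capital models)."""
    insured: str
    peril: str
    limit: float
    attachment: float
    currency: str

# Each broker presents the same risk under different field names and
# conventions; both mappings below are invented for illustration.
def from_broker_abc(row: dict) -> RiskRecord:
    return RiskRecord(
        insured=row["Insured Name"],
        peril=row["Peril"].lower(),
        limit=float(row["Limit (000s)"]) * 1_000,      # ABC quotes in thousands
        attachment=float(row["Attachment (000s)"]) * 1_000,
        currency=row["Ccy"],
    )

def from_broker_xyz(row: dict) -> RiskRecord:
    return RiskRecord(
        insured=row["insured"],
        peril=row["peril_code"].lower(),
        limit=float(row["limit"]),                     # XYZ quotes full amounts
        attachment=float(row["excess"]),
        currency=row["currency"],
    )

# Without a shared format, a parser like this must be written, tested,
# and maintained for every broker/system pairing.
PARSERS = {"ABC": from_broker_abc, "XYZ": from_broker_xyz}

abc_row = {"Insured Name": "Acme Co", "Peril": "WS", "Limit (000s)": "25000",
           "Attachment (000s)": "5000", "Ccy": "USD"}
print(PARSERS["ABC"](abc_row))
```

An open, shared submission format removes the need to maintain one such mapping per broker and per system.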

On the catastrophe modeling side, arguably one of the largest single cost areas for reinsurers, the drive for open-source modeling and data formatting has been gaining traction. The OASIS Loss Modeling Framework (LMF) is an industry-supported venture subscribed to by 30-plus re/insurers. The OASIS platform can host commercial models or home-grown models, and also supports model development directly on the platform.
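Conceptually, the models such a platform hosts follow the same event-based pattern: a hazard footprint, a vulnerability curve, and an exposure set combine into an event loss table. The Python sketch below illustrates only that general pattern; it does not use the OASIS LMF API, and every figure in it is invented.

```python
# Conceptual event-loss calculation of the kind a hosted cat model
# performs. This is NOT the OASIS LMF API; all figures are invented.

exposures = {  # location -> total insured value
    "loc_1": 10_000_000,
    "loc_2": 4_000_000,
}

# Hazard footprint: intensity (e.g., peak gust in m/s) that each
# simulated event produces at each location.
footprint = {
    "event_1": {"loc_1": 45.0, "loc_2": 30.0},
    "event_2": {"loc_1": 60.0},
}

def damage_ratio(intensity: float) -> float:
    """Toy vulnerability curve: share of value destroyed at a given
    hazard intensity."""
    return min(1.0, max(0.0, (intensity - 25.0) / 50.0))

# Event loss table: ground-up loss per event, the raw output that
# pricing and capital models consume downstream.
elt = {
    event: sum(exposures[loc] * damage_ratio(i) for loc, i in hits.items())
    for event, hits in footprint.items()
}
print(elt)
```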

Notably, open source need not apply to everything: users of the platform may wish to keep their results confidential, and commercial model vendors will likewise want to keep their IP private – and, of course, will be able to do so. In keeping with the open-source ethos of OASIS, collective collaboration on open-source hazard model development is also foreseen.

On the data format side, a number of players are getting involved in the development of an open data format. Simplitium, a risk analytics service provider, has launched a fully open-source, vendor-agnostic data format for catastrophe modeling. The ultimate goal for reinsurers and insurers alike is a data set that is formatted to the user’s specifications and can be retrieved as needed throughout the organization, thereby providing a “single source of truth” for all systems, including account management, accounting, and claims.
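In practice, a “single source of truth” means each downstream system derives the view it needs from one authoritative dataset instead of keeping its own copy. A minimal Python sketch follows, with invented column names loosely in the spirit of an open exposure format:

```python
import csv
import io

# One authoritative, openly formatted dataset. The column names are
# hypothetical, chosen only to suggest an open exposure schema.
MASTER = """policy_id,insured,country,peril,tiv,currency
P001,Acme Co,US,windstorm,10000000,USD
P002,Beta Ltd,JP,earthquake,4000000,JPY
"""

rows = list(csv.DictReader(io.StringIO(MASTER)))

# Each system takes the view it needs -- no re-keying, no divergent
# copies of the same risk drifting apart over time.
accounting_view = [(r["policy_id"], r["currency"], float(r["tiv"])) for r in rows]
cat_model_view = [(r["country"], r["peril"], float(r["tiv"])) for r in rows]

print(accounting_view)
print(cat_model_view)
```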

Transparency of data becomes even more critical when discussing next-generation business models in reinsurance, such as exchange-based secondary markets and digital platforms driving automated placement. Companies like Tremor Technologies (Tremor), which has launched a programmatic re/insurance risk transfer marketplace, are leading the way in alleviating data challenges to enable those digital-first, data-centric business models. Tremor is transforming how insurers purchase reinsurance, complementing the broking process, without requiring participants to standardize data and submissions or make duplicate data entries into another system – significantly improving the speed and quality of reinsurance placement execution.

As the reinsurance sector looks for ways to harness data to drive efficiencies, reduce cost factors, and enhance cedent experience, there will be a need for a concerted effort from all industry participants to move towards open data formats, accelerate digitalization, and build a culture of data sharing and collaboration.

There are, however, practical and pragmatic concerns around business disruption, the high cost of upgrading legacy systems, organizational inertia, and a general lack of in-house digital talent. Those concerns can be addressed effectively by building a robust data strategy that is closely aligned with the organization’s business objectives and data maturity, and turning strategy into action by onboarding a data-driven, insurance-focused technology partner.
