With a growing share of insurance executives viewing generative artificial intelligence as a tool for streamlining and improving functions like fraud detection, Deloitte predicts that AI technologies could save the property/casualty insurance industry tens of billions of dollars in the years ahead.
Deloitte wrote in a report that by implementing AI-driven technologies across the claims life cycle and integrating real-time analysis from multiple modalities, P/C insurers could reduce fraudulent claims and save between $80 billion and $160 billion by 2032. Insurers that integrate multimodal capabilities using AI and advanced analytics could generate potential savings of 20% to 40%, depending on the implementation, type of insurance and sophistication of fraud detection systems, according to the report.
“Multiple techniques such as automated business rules, embedded AI and machine learning methods, text mining, anomaly detection, and network link analysis could score millions of claims in real time,” Deloitte said in the report. “Combining data from various modalities, such as text, images, audio, and video, could help identify patterns and anomalies and enhance the investigative process by reducing false positives, increasing detection rates of fraudulent claims, and saving on costs associated with fraud investigations.”
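For illustration only, here is a minimal sketch of how one of the techniques the report names, anomaly detection blended with an automated business rule, might score a batch of claims. The claim fields, thresholds and weights are invented for this example, and the scikit-learn-based approach is an assumption, not anything Deloitte describes.

```python
# Hypothetical claim triage combining a business rule with an anomaly model.
# All fields, thresholds, and weights are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy feature matrix: [claim_amount, days_to_report, prior_claims]
claims = np.abs(rng.normal(loc=[5_000, 3, 1], scale=[2_000, 2, 1], size=(1_000, 3)))

# Unsupervised anomaly score relative to the rest of the book.
model = IsolationForest(random_state=0).fit(claims)
anomaly = -model.score_samples(claims)  # invert so larger = more unusual

# Simple automated business rule: late-reported, high-value claims.
rule_flag = (claims[:, 0] > 15_000) & (claims[:, 1] > 7)

# Blend both signals into one triage score for investigators.
normalized = (anomaly - anomaly.min()) / (anomaly.max() - anomaly.min() + 1e-9)
score = 0.7 * normalized + 0.3 * rule_flag
print("claims routed for review:", int((score > 0.6).sum()))
```

In a multimodal setup of the kind the report describes, signals extracted from text, images, audio and video would feed additional features into the same kind of triage score.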
Soft fraud, which involves inflating a legitimate claim, accounts for 60% of all fraud incidents and currently has a detection rate of 20% to 40%, Deloitte data shows. Hard fraud, characterized by premeditated actions to create false claims, accounts for the other 40% of claims fraud and has a detection rate of 40% to 80%. In a June 2024 survey conducted by Deloitte, 35% of insurance executives chose fraud detection as one of the top five areas for developing or implementing gen AI applications over the next year.
Kedar Kamalapurkar, managing director and a leader in the insurance sector claims practice at Deloitte Consulting LLP, said AI technology can address both hard and soft claims fraud and help prevent claims from arising in the first place.
Hard Fraud
Kamalapurkar explained that as technology advances, committing hard fraud can become easier. Generative AI can produce images that are difficult to distinguish from genuine photos. Whether they replicate damage or make old damage look new, phony images could slip past the AI systems of vendors that generate estimates from photos.
Adding digital fingerprints to accepted images is one way Kamalapurkar has seen claims technology begin to catch up. The fingerprints act as a kind of DNA marker, alerting companies when duplicate images are submitted. Kamalapurkar has also seen vendors share fingerprints across their client bases to broaden detection.
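As a rough illustration of the fingerprinting idea, the sketch below computes a simple perceptual (average) hash for a photo and compares it against previously accepted images. The hash choice, the Pillow/NumPy implementation and the distance threshold are assumptions; vendors do not disclose their actual methods.

```python
# Hypothetical image "fingerprinting" via a perceptual (average) hash.
from PIL import Image
import numpy as np

def average_hash(path: str, size: int = 8) -> int:
    """Downscale, grayscale, and threshold against the mean to get a 64-bit hash."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float64)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests a resubmitted image."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare a new photo against previously accepted ones.
# known = {claim_id: average_hash(path) for claim_id, path in accepted_photos}
# new_hash = average_hash("incoming_damage_photo.jpg")
# duplicates = [cid for cid, h in known.items() if hamming(h, new_hash) <= 5]
```

Sharing hashes rather than the photos themselves is also what would let vendors pool fingerprints across client bases without exchanging the underlying images.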
“Those companies have existed for a couple of years,” he said, “but I think the ability to integrate them into the process without adding a lot of extra time—that’s what allowed it to become more efficient.”
As recently as six or eight months ago, he did not recall seeing an AI model that could accurately determine whether an image was real or fake across more than a subset of vehicle types. Since then, he said, the models have become more effective at detecting deepfakes.
“As you get more virtual, it increases the probability that you are experiencing some of this,” he said of claims fraud related to image deception. “And our ability to detect it is going up. And AI is really going to be critical because it can find almost the pixel-level … variation in a photograph. Or detect that the entire photograph itself is generated by artificial intelligence.”
Soft Fraud
Proving soft fraud is more difficult, but AI offers some hope for claims investigators.
Kamalapurkar said AI models can pull together disparate pieces of information that previously demanded far more effort and expense, such as tapping into advanced sensors and video cameras to determine what happened in an accident, who was at fault and how the impact ties to injury causation. To do this, companies are tying automobile impact data from vehicles, estimates and damages to medical information, he said.
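A hedged sketch of that kind of linkage follows: it joins hypothetical vehicle impact readings to claim and medical figures and flags mismatches for causation review. The field names, thresholds and pandas-based join are illustrative assumptions, not any vendor's model.

```python
# Hypothetical linkage of telematics impact data to claimed injuries.
import pandas as pd

telematics = pd.DataFrame({
    "claim_id": [101, 102, 103],
    "delta_v_mph": [4.0, 22.0, 2.5],  # impact severity from vehicle sensors
})
claims = pd.DataFrame({
    "claim_id": [101, 102, 103],
    "claimed_injury": ["whiplash", "whiplash", "back strain"],
    "medical_billed": [18_000, 9_500, 21_000],
})

merged = telematics.merge(claims, on="claim_id")
# Flag high medical spend paired with very low recorded impact severity.
merged["causation_review"] = (merged["delta_v_mph"] < 5) & (merged["medical_billed"] > 10_000)
print(merged[["claim_id", "causation_review"]])
```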
“There’s models that exist in the market that more objectively, and backed by medical science, prove what is likely to have happened versus not,” Kamalapurkar said. “It still requires this thought of, ‘If I go to a jury, is anyone going to care?’ And that’s what the debate is.”
Regardless of how valuable the technology ultimately proves to be in a court setting, these data insights provide a deeper perspective on a claim. And even if they do not stop the fraud outright, Kamalapurkar sees this kind of technology as a deterrent: it raises the barrier to entry for hard and soft fraud and pushes fraudsters toward other targets, he said.
Prevention
Kamalapurkar also believes AI can prevent claims by curating personalized information and alerts for policyholders: Combining historical claims data from an insurer with sensor information from inside a car can prompt recall or service discussions on an individual basis, and location data can be used to encourage drivers to park in lots where fewer accidents occur.
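To make the idea concrete, here is a minimal, hypothetical sketch of a prevention alert that combines an insurer's claims history with in-car sensor readings; the data fields, thresholds and messages are invented for illustration.

```python
# Hypothetical prevention alerts built from claims history plus vehicle sensor data.
from dataclasses import dataclass

@dataclass
class Policyholder:
    name: str
    prior_brake_claims: int            # from the insurer's historical claims data
    brake_wear_pct: float              # from in-car sensor telemetry
    usual_lot_accident_rate: float     # accidents per 1,000 cars at the usual parking lot

def prevention_alerts(p: Policyholder) -> list[str]:
    alerts = []
    if p.prior_brake_claims >= 1 and p.brake_wear_pct > 70:
        alerts.append("Schedule brake service: sensor wear is high and you have a prior brake-related claim.")
    if p.usual_lot_accident_rate > 5.0:
        alerts.append("Consider a nearby lot with a lower accident rate.")
    return alerts

print(prevention_alerts(Policyholder("A. Driver", 1, 82.0, 7.3)))
```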
“I think my broad thought on AI in this case is AI plus human is going to be better than human alone or AI alone,” Kamalapurkar said. “Because you need context and experience that both of them would have.”