Viewpoint: AI Is Changing the Cyber Risk Landscape

By Mark Millard | April 1, 2024

Innovation is at the heart of the human experience, with a constant stream of new ideas and inventions changing the way we conduct business. But seismic shifts in technology don't come all at once, or even at a steady pace; they tend to come in waves. Between those waves, during periods of more gradual advancement, organizations can update their resilience strategies to get ahead of risks on the horizon and anticipate cyberthreats. In the current cycle of emergent technology, the sophistication of threats powered by generative artificial intelligence (AI) means the scale and complexity of cyberattacks may be outpacing organizations' risk response mechanisms.

The widespread uptake of AI is one such rapidly transformative moment, and everyone from individuals to organizations and regulators needs to be prepared for the opportunities the technology offers and the risks it poses. The recent proliferation of lawsuits concerning the use of AI across industries including healthcare, publishing and commercial real estate shows that while AI can be harnessed to drive growth, it also carries heightened risk when deployed without appropriate guardrails and guidance.

Risk Changes as Technology Advances

Since the earliest days of the internet, people have been warned to be careful about what they say, and to whom, when interacting with strangers online. In the realm of business, the stakes can be extremely high from a financial standpoint, with large transactions routinely conducted online and large amounts of commercially sensitive information stored and transmitted digitally. As cutting-edge forms of AI – like generative AI – enter our everyday lives, new and highly sophisticated threats are further complicating the cyber landscape.

Publicly available AI models are a useful tool when approached with a working knowledge of how they function. For example, a group of software engineers at a leading technology company unwittingly became a cautionary tale when they tested unreleased proprietary code by entering it into ChatGPT. Unaware that data entered into the chatbot could be retained by its provider and used to improve the service, they effectively moved confidential code outside the company's control, where it could potentially surface to other users of the service, including competitors and cybercriminals.

This example illustrates two of the pitfalls organizations face amid rapid advances in technology: the tools themselves may contain flaws that increase risk, or, as is more common, the way human users interact with those tools may create new exposures.

Dual Risks: Technology and Users

Any time an organization adopts new technology, it inherently opens itself up to risk by introducing a new set of unknowns into its business practices. Granting the wrong users access to a program, for example, or flaws in the program's code are technological issues that can create security vulnerabilities, and they need to be addressed by IT and cybersecurity professionals. Traditional hacking, in which cybercriminals use code to break through an organization's cybersecurity systems, is increasingly difficult, but the sudden ubiquity of AI offers a new way to create vulnerabilities: targeting the people who use those systems with lifelike dupes.

Emails that look genuine but are designed to extract security credentials are not a new phenomenon, but generative AI has allowed sophisticated new forms of phishing attack to proliferate on an unprecedented scale. Deepfakes, in which criminals generate highly convincing visual and audio assets to impersonate others, take this a step further. Recently in Hong Kong, an employee of a multinational firm believed they were on a conference call with several colleagues and company directors when, in fact, the other participants were deepfakes created by cybercriminals. The employee was persuaded to wire the equivalent of $25 million to the criminal syndicate.

If we think of an organization’s data as its crown jewels, we might also think of its cybersecurity measures as the castle walls that protect it. But as AI tools become better at imitating human speech and thought patterns, it’s no longer enough to keep an eye out for the battering rams and ladders cybercriminals might use to breach the walls – organizations must take extra care to ensure employees don’t unwittingly open the gates themselves.

Preparing for New Risks

There are no "silver-bullet" solutions for the organizational risks posed by new technologies, but there are measures companies can take, regardless of their business domain, to update and maintain their resilience strategies.

These include:

  • Creating a cross-departmental cyberthreat working group that proactively aligns risk strategy across technical, financial and leadership teams.
  • Monitoring resource allocation and budgets. As a standing agenda item, the group should review the acquisition and development of new technology to ensure company-wide awareness.
  • Expanding organizational networks to leverage third-party service providers that can deliver updates on evolving threats and potential treatments.

These actions can help risk managers and their organizations take a proactive approach to incident response, even before a major cyber event.

Regulatory Scrutiny

The uptake of AI has been so rapid and widespread that it affects our lives every single day, whether we are aware of it or not. This has led to increased scrutiny from regulatory bodies looking to install guardrails for the responsible use of AI and to protect users. The White House published its Blueprint for an AI Bill of Rights, which focuses on "making automated systems work for the American people," and in mid-December 2023 the Securities and Exchange Commission's new cyber disclosure rules came into effect. Under these rules, organizations must report material cyber incidents on Form 8-K within four business days of determining that an incident is material, describing its nature, timing and impact. They must also disclose how cybersecurity risks could materially affect the business and how their governance processes manage those risks, among other measures, and they face fines for breaches and non-compliance.

Lawsuits On The Horizon

Recent court rulings could allow for an increase in litigation when a cyber breach occurs. The 2nd U.S. Circuit Court of Appeals ruled that plaintiffs can sue an organization in the event of a data breach if their information is accessed by a third party, even if the information wasn't misused. In other words, a lack of material loss may not preclude a plaintiff from suing a company; if cybercriminals managed to access private data in a cyberattack, that may be grounds enough for a lawsuit.

Individuals may also be at risk of litigation in a business context. Recent SEC enforcement actions indicate that the regulator is willing to hold individuals accountable for cyber incidents. If an employee is deemed responsible for shortcomings that led to a data breach, or if they did not follow adequate disclosure policies, they may be subject to large civil penalties and even criminal charges.

Emerging Risks and Insurance

In this context of heightened government scrutiny and legal action, it is essential to understand how insurance policies can respond to emerging technology risks. As a relatively new form of insurance in a rapidly changing field, cyber insurance does not currently have a standardized form. Different carriers offer different coverages, terms, exclusions, conditions and endorsements. Organizations should regularly undertake cyber risk assessments and ensure they have adequate cyber coverage for the organization and its directors to cover any liabilities.

Organizations should also regularly consult with trusted professionals about emerging and current technology risks to understand how insurers view the current risk landscape. The more an organization knows about disparate cyber threats, the more adequately it can adjust its coverage to reduce risk exposure, while updating resilience protocols.

Of course, thwarting cyber incidents before they occur is the preferred course of action for organizations and their stakeholders, but building a robust resilience plan and putting a strong insurance policy in place are not just contingencies; they are necessities.

Considering a Policy

The advance of technology and the stakes involved mean that when it comes to the question of data breaches, it’s no longer a question of “if” but “when.” At some point, cybercriminals will successfully perpetrate an attack that results in an insurance claim.

A strong insurance policy will provide the right coverage for an organization. As risk managers and their teams build this policy, they should assess the following criteria:

  • The terms and conditions in the potential policy, paying special attention to exclusionary language.
  • The range of coverage to be provided, in relation to the organization's risk profile and the policy's cost.
  • Coverage available from other insurance policies.

When considering a policy, it is recommended that the risk management team request complete copies of the insurance policies on offer — specimen forms and the full endorsements that would be included — as well as adequate time to evaluate the terms offered.

The New Normal

New technologies disrupt our world, but over time their presence becomes normalized in our daily lives and their uses become regulated and understood. Deepfakes created with generative AI might be the latest threat deployed by cybercriminals, but they certainly won’t be the last. Keeping a watchful eye on emergent technologies and undertaking regular risk assessments is just the first step toward mitigating cyber risk. Pairing a strong operational resilience strategy with an insurance policy to match will help your organization be better prepared for the next time a bad actor is ready to strike.
