More than two-thirds of independent agencies plan to increase use of artificial intelligence in the next 12 months, though just 8% said AI is currently embedded in their daily workflows, according to a new report from The Big “I” Agents Council for Technology.
In its annual tech trends report, ACT found that 38% of survey respondents are “very likely” to increase AI use, while 30% are “somewhat likely” to do so over the next year.
Operational efficiency (60%) and staff productivity (52%) were cited as leading motivations for adopting AI. Data privacy or compliance risks (24%) and inaccurate outputs (22%) topped the list of concerns.
Kasey Connors, executive director of ACT, called the report’s findings “a pivotal moment” for independent agencies. With growing AI interest signaling momentum, “long-term success hinges on clearer governance, stronger training, and more integrated technology strategies,” she said in a statement.
Nearly a third of the national survey’s respondents (31%) said they are not currently using AI, while 33% described themselves as “just experimenting” with the technology. Another 22% said they are using AI only in limited areas.
ACT pointed to “a growing gap” between the promise of what AI can deliver and the operational readiness required to implement it responsibly and effectively. The council highlighted several constraints, including a lack of documented processes, vendor-related confusion, resource and budget limitations, portal and multifactor authentication fatigue, security and governance gaps, as well as change fatigue and tool sprawl.
“AI is entering agencies at a time when many are already struggling with disconnected systems and limited automation,” Connors said. “That complexity makes it harder to move from experimentation to meaningful impact.”
“What we hear consistently is that agents aren’t worried about the price of AI; they’re worried about the cost of getting it wrong. Data privacy, compliance, and accuracy have to be addressed before agencies are comfortable scaling AI use,” she said.
Beyond privacy and compliance concerns, 17% of survey respondents said their top worry about using AI tools is losing human touch, while 16% said they don’t know how to apply the technology.
Nearly half of survey respondents (45%) reported already using ChatGPT and other public large language models. Far fewer agencies said they use AI in policy comparison tools (20%), marketing tools (18%), chatbots and virtual assistants (13%), or document and data extraction tools (13%).
“AI in its current form should be treated like a junior colleague,” ACT said in the report. “Although it’s fast and capable of consuming large volumes of information, it still requires supervision for complex or high-impact decisions.”
The council’s survey found that only 13% of agencies have a formal AI policy. More than half (56%) do not have a policy, and almost 44% reported relying on peer-to-peer training on new tech tools or systems.
“That will have to change in the coming year for agencies to close significant security and liability gaps,” ACT said in the report.