FTC to Review AI Chatbot Risks With Focus on Privacy Harms

September 5, 2025

The US Federal Trade Commission plans to study the harms that AI-powered chatbots, like those offered by OpenAI, Alphabet Inc.’s Google and Meta Platforms Inc., pose to children and others, according to people familiar with the matter.

The study will focus on privacy harms and other risks to people who interact with artificial intelligence chatbots, the people said. It will seek information on how data is stored and shared by the services as well as the dangers people can face from chatbot use, said the people, who asked not to be identified discussing the unannounced study.

The FTC didn’t immediately respond to a request for comment. A White House spokesman didn’t comment specifically about the FTC study, but said the agency is proceeding with user safety in mind as the administration hosts an artificial intelligence event with industry leaders Thursday.

“President Trump pledged to cement America’s dominance in AI, cryptocurrency and other cutting-edge technologies of the future,” White House spokesman Kush Desai said in a statement. “FTC Chairman Andrew Ferguson and the entire administration are focused on delivering on this mandate without compromising the safety and well-being of the American people.”

Chatbot developers face intensifying scrutiny over whether they’re doing enough to ensure the safety of their services and prevent users from engaging in dangerous behavior. Last week, the parents of a California high school student sued OpenAI, alleging that its ChatGPT isolated their son from his family and helped him plan his suicide in April. The company has extended its sympathies to the family and is reviewing the complaint.

Regulatory Scrutiny

The FTC’s plans underscore regulators’ interest in the exploding use of artificial intelligence despite recent administration directives that the technology be allowed to grow unimpeded with a lighter regulatory touch. In July, the White House issued guidelines urging agencies including the FTC to show more restraint in probes involving AI and stand down on cases that put innovation at risk.

The White House is bringing together tech industry leaders Thursday, including Meta’s Mark Zuckerberg, Apple Inc.’s Tim Cook, OpenAI’s Sam Altman and Microsoft Corp.’s Satya Nadella, for an artificial intelligence event hosted by First Lady Melania Trump.

OpenAI declined to comment and pointed to a Tuesday blog post outlining actions it’s taking. Meta declined to comment; the company has recently taken steps aimed at ensuring that its chatbots avoid engaging with minors on topics including self-harm and suicide. Alphabet didn’t immediately respond to a request for comment.

The first lady announced last month that she was launching a presidential challenge to encourage students to use emerging AI technology to find solutions to community challenges. The effort will also encourage educators to adopt AI in the classroom, the White House has said.

The agency plans to conduct the study under its so-called 6(b) authority to compel companies to turn over information to help it better understand a particular market or technology. The FTC will seek information from the nine largest consumer chatbots, the people said, including OpenAI’s ChatGPT and Google’s Gemini.

AI Startups

Other recent FTC studies include an examination of tech giants’ investments in AI startups and a study on drug pricing. The agency generally issues a report on its findings after analyzing the information from companies.

FTC Commissioner Melissa Holyoak called for such a review at an agency event in June, saying the effort should explore potential online harms to children including the use of “addictive design features” and the erosion of privacy protections.

Holyoak said at the event that the agency should look at “generative artificial intelligence chatbots that simulate human communication and effectively function as companions.” She cited reports of “alarming” interactions with young users, including “providing users instructions for committing crimes, influencing them to commit suicide, self-harm or harm to others, and discussing and role-playing romantic or sexual relationships.”

The FTC’s Ferguson said in an interview with Bloomberg Television last month that AI companies “need to be honest about how they’re describing their products to consumers.”

The Wall Street Journal earlier reported on the planned study.

Photo: Tomohiro Ohsumi/Getty Images
