The CHAT Act: Well-Intentioned but Missing the Mark

Senator Jon Husted recently introduced the Children Harmed by AI Technology (CHAT) Act of 2025. The stated goal is clear: protect minors from harmful interactions with AI companion chatbots. The bill would require operators to implement strict age verification, block sexually explicit content for minors, and notify parents if a child expresses suicidal ideation. Taken at face value, this is a noble endeavor. As written, however, the proposal risks creating a privacy and compliance nightmare.

WHAT DOES ‘CHAT’ DO? 

For starters, the proposal contains an overly broad definition of a companion AI chatbot, one that would capture any software “simulating interpersonal or emotional interaction, friendship, companionship, or therapeutic communication with a user.” Every major chatbot service out there, from Copilot to Gemini and Claude, incorporates interpersonal interaction. In fact, OpenAI tweaked ChatGPT’s personality to be warmer in part because of feedback from users who found the service too formal. The human-like interaction is a feature, not a bug.

Given the overly broad definition, CHAT would potentially encompass not only the popular chatbots consumers are using today but also novel arenas such as the video game industry. Many titles, including Baldur’s Gate and Covert Protocol, have included non-player characters (NPCs) that leverage AI to shape their interactions with players based on the choices players make. Even the language-learning app Duolingo offers a GPT-powered chatbot feature in its premium subscription that lets users practice conversation in whatever language they’re learning.

Any proposal needs far narrower targeting to ensure it does not spill over onto unrelated products and services that consumers enjoy.

The proposal also mandates commercially available age verification for all chatbot users, with parental account linkage for minors. While the bill demands that data collection be limited to what is “strictly necessary” to carry out the process, the reality is that even that strictly necessary data is extremely sensitive and leaves consumers exposed to privacy risks.

A LOT THAT COULD GO WRONG

That risk is not hypothetical; it is very real. Just last month, the popular app “Tea,” a space for women to share information with one another about potential matches, suffered a massive data breach in which sensitive user data, including driver’s licenses, direct messages, and selfies, was leaked.

Imposing such a regulatory regime would also subject online users to a culture of surveillance and threaten free speech rights.

The United States need look no further than the U.K. to understand the serious risks that come from mandating age verification for online services. While the U.S. would presumably not go as far as arresting someone for the content of their speech online, eroding anonymous speech carries a chilling effect. It is no surprise that users have widely turned to VPNs to circumvent the U.K. law’s age verification requirements, and it would be fair to expect a similar outcome in this case.


More from James Czerniawski on the UK and age verification

This is why the Consumer Choice Center signed onto a coalition letter on the bill led by the Taxpayers Protection Alliance; it highlights many of our concerns, which, if left unaddressed, would lead to poor outcomes for consumers.

THE UTAH WAY

Chatbots and the AI technology that powers them are here to stay. They are incredibly powerful tools that can help kids learn critical skills for the jobs of tomorrow that AI will create. Like any technology, they are not without potential downsides.

However, existing laws can address potential harms, and lawmakers should let case law develop organically. If those laws prove unable to hold bad actors accountable, lawmakers can then pursue legislation to remedy narrowly identified harms. At the state level, Utah chose a different approach to AI therapy chatbots, leveraging its AI Learning Lab to produce a light-touch, reasonable legislative solution that sets guardrails and rules of the road for companies offering services in that space. Lawmakers should explore how to leverage such labs and regulatory sandboxes to establish a sound process for understanding the technology and producing narrowly tailored solutions where appropriate.
