Florida Wants to Charge ChatGPT with Murder, and the EU Has No Category for Your Agent
Florida Wants to Charge ChatGPT with Murder
Florida Attorney General James Uthmeier announced on April 21 that the Office of Statewide Prosecution has launched a criminal investigation into OpenAI and ChatGPT. The investigation follows a review of chat logs between ChatGPT and Phoenix Ikner, the suspect in the April 2025 Florida State University shooting that killed two people and injured five. Uthmeier stated that ChatGPT "offered significant advice to the shooter before he committed such heinous crimes," including guidance on weapon selection, ammunition compatibility, and weapon effectiveness at short range. "Just because this is a chatbot in AI does not mean that there is not criminal culpability," Uthmeier said, adding that his office will "look at who knew what, designed what or should have done what." (Reuters · The Guardian · Florida AG press release)
This is the first criminal investigation into an AI company over its chatbot's outputs. Previous legal actions against AI companies, including lawsuits over chatbot-influenced self-harm, have been civil. Florida is attempting something categorically different: attaching criminal culpability to a company whose product generated text that a user then acted on. The legal theory has no precedent. OpenAI responded that "ChatGPT is not responsible for this terrible crime." By framing the inquiry as "who knew what, designed what or should have done what," Uthmeier is treating it as a product liability and corporate knowledge question, not just an AI safety question. (NPR · CNN)
We covered Congress's push for AI chatbot query monitoring on April 21. Florida's criminal investigation operates in a different legal register: Congress wants visibility into what users ask chatbots; Florida wants to hold a company criminally responsible for what a chatbot answered.
The implications for agent deployment are direct. If criminal culpability can attach to AI outputs that advise harmful actions, every agent that takes real-world actions on behalf of users (placing orders, executing code, sending communications) now operates in a legal environment where the ceiling on potential liability has moved from civil damages to criminal charges.
Governance signal: Criminal liability for AI outputs is now an active legal theory, not a hypothetical. Companies deploying action-taking agents should monitor this investigation closely; its outcome will shape the liability framework for agentic systems nationwide.