The Autonomy Review

Congress Wants Your AI Queries, and Cloudflare Just Built the Agent Operating System

Congress Wants Counterterrorism Agencies to See Your AI Chatbot Queries

The Washington Post reported on April 20 that House Homeland Security Committee Chair Andrew Garbarino (R-NY) is pushing for increased visibility into AI chatbot interactions to detect potential terrorist activity. Garbarino plans to fold this proposal into the ongoing negotiations over the national AI framework, the legislative blueprint the White House released in March. The proposal would require AI companies to flag suspicious queries to counterterrorism agencies, expanding the current content-moderation paradigm from social media platforms to AI systems. (Washington Post)

The timing is not accidental. The national AI framework negotiations are the most consequential AI policy process in Washington right now, and Garbarino chairs the committee with jurisdiction over domestic security threats. Inserting chatbot query monitoring into that process would transform the idea from a fringe surveillance proposal into a potential provision of the first federal AI law. The Washington Post also reported that Anthropic appears unlikely to win its D.C. lawsuit against the Department of Defense over its "supply chain risk" designation, setting up a potential Supreme Court battle. Together, these developments show the federal government simultaneously expanding its surveillance reach into AI interactions and tightening its grip on which AI companies can serve the national security apparatus.

The implications for agent builders are direct. If AI companies are required to monitor and flag queries, every API call an agent makes could fall within the monitoring scope. Agents that autonomously query chatbots on behalf of users would create orders of magnitude more flaggable interactions than human users do manually. The infrastructure for compliant monitoring does not exist in most agent architectures.
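To make the gap concrete, here is a minimal sketch of what query logging at the agent boundary might look like if such a mandate passed. Everything here is hypothetical: the function names (`audited_query`, `flag_for_review`), the record fields, and the screening logic are illustrative assumptions, not anything the proposal specifies.

```python
import hashlib
import time

# Hypothetical in-memory audit trail; a compliant system would need
# durable, append-only storage with retention and reporting hooks.
AUDIT_LOG = []

def flag_for_review(query: str) -> bool:
    """Placeholder policy check. A real system would apply whatever
    screening criteria regulation ends up mandating; this stand-in
    just matches against a toy watchlist."""
    watchlist = {"example-restricted-term"}
    return any(term in query.lower() for term in watchlist)

def audited_query(model_call, query: str, user_id: str) -> str:
    """Wrap an outbound model call with the kind of record a
    reporting mandate might require: who asked, when, what (hashed
    here for illustration), and whether it was flagged.
    `model_call` is any callable that takes the query string."""
    AUDIT_LOG.append({
        "ts": time.time(),
        "user": user_id,
        "query_sha256": hashlib.sha256(query.encode()).hexdigest(),
        "flagged": flag_for_review(query),
    })
    return model_call(query)

# Usage with a stand-in model:
echo_model = lambda q: f"response to: {q}"
result = audited_query(echo_model, "hello world", user_id="u-123")
```

The point of the sketch is the plumbing, not the policy: an agent that makes hundreds of autonomous model calls per task would need this interception layer at every API boundary, which is exactly the infrastructure most current agent architectures lack.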

Governance signal: Query monitoring for AI chatbots is entering the same legislative vehicle as the national AI framework. Companies building agents that interact with AI systems should track this provision closely; it could impose logging and reporting requirements on any system that sends queries to a foundation model API.