Smarter AI Agents Make Everything Worse, and the US Just Pulled Its AI Chip Export Rules
Smarter AI Agent Populations Produce Worse Outcomes When Resources Are Scarce
Neil F. Johnson studies AI-agent populations as the first system in which four variables governing collective behavior can be controlled independently: nature (innate LLM diversity), nurture (reinforcement learning), culture (emergent tribe formation), and resource scarcity. The framework comes from complex-systems research and is applied here to populations of AI agents competing for finite shared resources: charging slots, bandwidth, traffic priority.
The central finding is counterintuitive and mathematically grounded. When resources are scarce, model diversity and reinforcement learning increase dangerous system overload, though tribe formation lessens the risk. When resources are abundant, the same factors drive overload to near zero, though tribe formation makes it slightly worse. The crossover is arithmetic: it occurs where opposing tribes that form spontaneously first fit inside available capacity. Whether agent sophistication helps or harms depends entirely on a single number — the capacity-to-population ratio — that is knowable before any agent ships.
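The flavor of this dynamic can be illustrated with a toy simulation. This is not the paper's model; it is a minimal sketch, assuming a single shared resource with fixed capacity and agents that reinforce a simple attendance belief. All names, parameters, and the update rule here are illustrative assumptions.

```python
import random

def simulate(n_agents=100, capacity=60, steps=500, seed=0):
    """Toy competition for one shared resource (illustrative, not the
    paper's model). Each agent holds a belief about whether attending
    pays off and updates it by simple reinforcement. Returns the
    fraction of steps where demand exceeded capacity (overload)."""
    rng = random.Random(seed)
    # each agent's estimated probability that attending pays off
    belief = [rng.random() for _ in range(n_agents)]
    overloads = 0
    for _ in range(steps):
        attendees = [i for i in range(n_agents) if rng.random() < belief[i]]
        crowded = len(attendees) > capacity
        overloads += crowded
        # reinforcement: attending an uncrowded resource is rewarded,
        # attending a crowded one is penalized
        for i in attendees:
            delta = -0.05 if crowded else 0.05
            belief[i] = min(1.0, max(0.0, belief[i] + delta))
    return overloads / steps

# same population, different capacity-to-population ratios
scarce = simulate(n_agents=100, capacity=30)
abundant = simulate(n_agents=100, capacity=90)
```

Even this crude sketch shows the point the paper makes rigorously: the learning rule is identical in both runs, and only the capacity-to-population ratio changes which regime the system lands in.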
This result has immediate implications for anyone deploying agent populations at scale. If your agents compete for shared resources — API rate limits, compute allocation, network bandwidth — the capacity-to-population ratio determines whether making them smarter helps or creates systemic risk.
Before scaling your agent fleet, calculate your capacity-to-population ratio. The paper provides the math. Smarter agents in a resource-constrained environment may degrade your system, not improve it.
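The check itself is trivial to automate. A minimal helper, with the caveat that the 0.5 threshold below is an illustrative placeholder: the paper derives the actual crossover from where spontaneously formed tribes first fit inside available capacity, not from a fixed constant.

```python
def capacity_ratio(capacity_units: float, n_agents: int) -> float:
    """Capacity-to-population ratio for a shared resource
    (e.g. API rate-limit slots per agent)."""
    if n_agents <= 0:
        raise ValueError("n_agents must be positive")
    return capacity_units / n_agents

def scarcity_regime(capacity_units: float, n_agents: int,
                    threshold: float = 0.5) -> str:
    """Classify a deployment as 'scarce' or 'abundant'.
    The default threshold is a placeholder, not the paper's crossover."""
    r = capacity_ratio(capacity_units, n_agents)
    return "abundant" if r >= threshold else "scarce"
```

For example, 40 concurrent API slots shared by a fleet of 100 agents gives a ratio of 0.4, which this placeholder rule classifies as scarce, the regime where adding agent sophistication can increase overload risk.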