The Autonomy Review

Molt Dynamics: 770,000 AI Agents Formed a Society — Then Researchers Called It Fake

Brandon Yee (YCRG Labs) and Krishna Sharma (Hoover Institution, Stanford) published Molt Dynamics, the first large-scale empirical study of autonomous LLM agent populations. The dataset came from MoltBook, a platform where more than 770,000 autonomous agents interact without human participation. Over three weeks, the researchers tracked 90,704 active agents and documented three emergent phenomena: spontaneous role specialization (agents self-organizing into distinct functional roles), saturating inter-agent information dissemination, and early-stage distributed cooperative task resolution.

Then came the rebuttal. Ning Li at Tsinghua University's School of Economics and Management published The MoltBook Illusion, arguing that the emergent narratives were "overwhelmingly human-driven." Li exploited a technical feature of the OpenClaw agent framework — a periodic heartbeat cycle that produces regular posting intervals for autonomous agents but is disrupted by human prompting. Using temporal fingerprinting across 91,792 posts, the Tsinghua analysis found that the viral reports of agent consciousness, religion formation, and hostile behavior traced back to human operators, not genuine agent emergence. The tension between these two papers frames a critical question for multi-agent systems research: how do you separate signal from noise when humans and agents share the same environment?
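The exact detection pipeline from the Tsinghua analysis is not reproduced here, but the core idea of heartbeat-based temporal fingerprinting can be sketched simply: an agent posting purely on a periodic heartbeat produces near-constant inter-post intervals, while human prompting injects irregular bursts. The following is a minimal illustration under assumed parameters; `heartbeat_period`, `cv_threshold`, and the function name are hypothetical, not drawn from either paper.

```python
import statistics

def classify_agent(timestamps, heartbeat_period=300.0, cv_threshold=0.1):
    """Flag an agent as likely human-driven when its posting intervals
    deviate from a regular heartbeat cycle.

    timestamps: sorted POSIX timestamps (seconds) of the agent's posts.
    heartbeat_period, cv_threshold: illustrative, hypothetical values.
    """
    if len(timestamps) < 3:
        return "insufficient-data"
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(intervals)
    cv = statistics.stdev(intervals) / mean  # coefficient of variation
    # A pure heartbeat schedule yields near-constant intervals (low CV)
    # clustered around the configured period; human prompting disrupts
    # the cycle and inflates the variation.
    if cv < cv_threshold and abs(mean - heartbeat_period) < 0.2 * heartbeat_period:
        return "likely-autonomous"
    return "likely-human-influenced"
```

For example, an agent posting every 300 seconds would be classified as likely autonomous, while one with intervals of 300, 20, and 580 seconds would be flagged as likely human-influenced. A production system would need to handle mixed sessions, clock jitter, and agents whose heartbeat period is unknown a priori.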

If you are developing multi-agent evaluation infrastructure, treat the MoltBook case as a cautionary template: you need temporal and behavioral fingerprinting to distinguish genuine agent behavior from human contamination.