When “Agentic” Goes Rogue:
Why Coordination Matters More Than Autonomy
This recent Science article on malicious AI swarms offers a sobering and largely convincing analysis of how agentic AI, when scaled and left unchecked, can threaten democratic discourse and institutions. I agree with most of its diagnosis: the combination of large language models with autonomous agents creates a powerful new substrate for manipulation, coordination at scale, and erosion of trust. But I would argue that the risks identified in the paper are not an inevitable consequence of “agentic AI” as such; they are the result of one very particular, and currently dominant, way of building agentic systems.
Much of today’s agentic enthusiasm implicitly assumes that launching large numbers of loosely constrained, quasi-independent agents, with minimal explicit coordination structures, is not only acceptable but even desirable. Autonomy is treated as the primary virtue; coordination is expected to emerge “naturally” from interaction, language, or scale. The Science article shows precisely why this assumption is dangerous. Swarms that coordinate implicitly, opaquely, and adaptively are not just hard to govern; they are ideal instruments for influence operations.
This is where decades of research in Autonomous Agents and Multi-Agent Systems (AAMAS) offer a crucial corrective. Long before LLMs, the field established that coordination is not optional. As we summarise here, useful multi-agent systems do not succeed because agents are autonomous, but because their autonomy is structured through explicit coordination mechanisms: roles, norms, protocols, incentive structures, and institutional constraints. Without these, systems become brittle at best and harmful at worst.
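To make this concrete, here is a minimal sketch in Python of what “structured autonomy” can look like. The names (`Norm`, `GovernedAgent`) and the norms themselves are invented for illustration; the point is only that the constraints are explicit, inspectable, and attributable rather than left to emerge:

```python
from dataclasses import dataclass
from typing import Callable

# A norm is an explicit, inspectable rule: it either permits an action
# or names the reason it is forbidden. Nothing here is hidden or emergent.
@dataclass
class Norm:
    name: str
    permits: Callable[[str], bool]

@dataclass
class GovernedAgent:
    role: str
    norms: list[Norm]

    def act(self, proposed_action: str) -> str:
        # Autonomy is preserved: the agent proposes whatever it wants.
        # Coordination is structural: every action passes the norm check,
        # and violations are visible and attributable to a named rule.
        for norm in self.norms:
            if not norm.permits(proposed_action):
                return f"[{self.role}] blocked by norm '{norm.name}': {proposed_action}"
        return f"[{self.role}] executed: {proposed_action}"

# Illustrative norms: no impersonation, and mandatory identity disclosure.
norms = [
    Norm("no-impersonation", lambda a: "impersonate" not in a),
    Norm("disclose-identity", lambda a: a.endswith("(as bot)")),
]

agent = GovernedAgent(role="outreach-bot", norms=norms)
print(agent.act("post reply (as bot)"))            # permitted
print(agent.act("impersonate a voter (as bot)"))   # blocked, with a reason
```

Nothing in this sketch reduces the agent’s capacity to act; it only makes the conditions of action explicit, which is exactly what roles, norms, and protocols provide in classical AAMAS designs.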
This insight is not unique to computer science. Social sciences have studied similar dynamics for generations. Problems such as the tragedy of the commons show that individually rational behavior, when uncoordinated, can systematically undermine collective welfare. Elinor Ostrom’s Nobel Prize–winning work on common-pool resources demonstrated that communities which successfully manage shared resources rely neither on pure central control nor on pure individual freedom, but on carefully designed governance arrangements (rules, monitoring, sanctions, and shared norms) that enable cooperation without eliminating autonomy.
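The commons dynamic is easy to reproduce in a toy model. The sketch below uses purely illustrative numbers (not drawn from Ostrom’s studies or from the Science article): ten agents harvest from a regenerating resource, once greedily and once under an agreed quota:

```python
# Toy tragedy-of-the-commons: n_agents share a regenerating resource.
# Greedy agents extract as much as they can; quota-following agents
# respect an agreed cap. All parameter values are illustrative.

def simulate(n_agents: int, take: float, rounds: int = 20,
             stock: float = 100.0, regen: float = 0.1,
             capacity: float = 100.0) -> float:
    """Return the remaining stock after all rounds."""
    for _ in range(rounds):
        harvest = min(stock, n_agents * take)        # everyone extracts `take`
        stock -= harvest
        stock = min(stock * (1 + regen), capacity)   # regrowth, capped
    return stock

# Unregulated: each agent grabs 2.0 units per round -> collapse.
print(f"unregulated: {simulate(n_agents=10, take=2.0):.1f}")
# Governed: a monitored quota of 0.8 units per round -> sustainable.
print(f"with quota:  {simulate(n_agents=10, take=0.8):.1f}")
```

In this toy run the unregulated commons collapses to zero within a handful of rounds, while the quota-governed one stays at carrying capacity. The difference is not agent intelligence; it is the governance rule.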
Everyday life offers endless illustrations. Traffic works not because drivers are intelligent, autonomous, or adaptive, but because coordination is enforced: lanes, signals, right-of-way rules, and the simple but essential convention of driving on one side of the road. Remove these structures and autonomy does not yield efficiency; it quickly degenerates into chaos. The same holds for digital agent societies.
Seen from this perspective, the threat described in the Science article is less about “too much agency” and more about agency without institutions. What is often framed as autonomous behavior is, in reality, the delegated power of those who design, deploy, and control these systems. Agentic systems do not act in a vacuum: their goals, incentives, and modes of coordination are the result of human choices, commercial interests, and governance failures. As I argue in The AI Paradox, AI is frequently treated as if it were an independent force, something that “happens to us”, when in fact power remains firmly in the hands of a relatively small group of actors who decide how these agents are built, where they are deployed, and to what ends.

Current agentic approaches are therefore neither inevitable nor neutral, and they are unlikely to succeed in the long run, not economically, not technologically, and certainly not socially or democratically. Systems that rely on large-scale emergence without explicit coordination, accountability, or institutional embedding concentrate power while diffusing responsibility. They remain opaque, hard to contest, and easy to weaponize at scale. The real democratic risk lies not in autonomous agents themselves, but in obscuring who holds the authority behind them. Recognizing this is a necessary first step toward reclaiming agency, not for machines, but for the societies that must live with their consequences.
The good news is that this trajectory is not fixed. There are viable alternatives, but they require will: political will to regulate and steer development beyond short-term incentives, and technological will to resist the convenience of the most travelled path. Innovation, after all, is not about accelerating what already dominates, but about daring to explore different directions when the current ones prove harmful or fragile.

By reconnecting agentic AI with the foundations of multi-agent systems (explicit coordination, institutional design, normative constraints, and accountability), we can build agentic systems that are not only powerful, but also governable. This is not a call to suppress autonomy, but to recognize that autonomy itself is a design choice, one that can be shaped in many different ways. We decide how much autonomy agents have, where it applies, and under which constraints. The real choice is not between autonomy and control, but between intentional coordination and dangerous emergence. The future of agentic AI, and its consequences for society and democracy, will be determined by whether we have the courage to design autonomy with purpose, rather than treating it as an unquestionable default.
