The AI Paradox: Why the future of AI is ours to choose
On my upcoming book "The AI Paradox: How to Make Sense of a Complex Future" (Princeton University Press, forthcoming February 17, 2026).
I have been working in artificial intelligence since the late 1980s, long before AI became something people used daily or feared in headlines. Over the decades, I have watched the same cycle repeat itself: excitement, inflated expectations, disappointment, and anxiety. What has changed is the technology. What has not changed nearly as much are the questions we ask about it.
Today, AI is embedded in public services, workplaces, political decision-making, and global power relations. Yet the way we talk about it still oscillates between exaggerated promises and catastrophic warnings. Both perspectives obscure something fundamental: AI is not an autonomous force shaping our future. It is a set of systems and models created, deployed, and governed by people. The future of AI is therefore not inevitable. It is a matter of choice.
This is why I wrote "The AI Paradox". Rather than offering predictions or roadmaps, the book is structured around paradoxes. Predictions about AI age badly. They tend to overestimate what is imminent and underestimate the social consequences of what is already here. Paradoxes, in contrast, endure. They capture tensions that persist regardless of technical progress.
AI is full of such tensions. The more capable AI becomes, the more visible the limits of automation become. The more decisions we delegate to machines, the more we depend on human judgment. Efficiency does not replace responsibility; it increases the need for it. These are not problems that can be solved once and for all. They are tensions that must be continuously navigated.
One of the most persistent misunderstandings in current AI debates concerns human intelligence. What is often underestimated is not creativity or empathy in isolation, but the way human intelligence integrates social understanding, moral judgment, and responsibility. AI systems can recognize patterns, optimize outcomes, and simulate human expression. But they do not understand meaning, nor can they be responsible for consequences. Humans can. And must.
As AI systems become more capable, this distinction becomes sharper, not weaker. Contextual judgment, ethical reasoning, and accountability are not features we can simply add on to machines. They are foundational human capacities, and they become more important, not less, as AI advances.
These issues are inseparable from questions of governance. We still lack a shared understanding of what AI actually is, and this lack of clarity has consequences. When AI is treated as a vague, all-encompassing concept, it becomes an empty signifier. That vagueness serves those who want to avoid accountability or to present particular technological paths as inevitable. Effective governance does not require a single rigid definition, but it does require clarity about what exactly is being regulated and why.
The same applies to debates about bias and fairness. Bias is not a technical anomaly that can be eliminated with better data or smarter models. All systems reflect normative choices about what to measure, what to optimize, and what to ignore. Justice is not a statistical property. An AI system can reduce measurable bias and still produce unjust outcomes if it ignores historical inequality, social context, or lived experience. Deciding what is fair is a moral and political responsibility, not a computational one.
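To make that last point concrete, here is a minimal, purely illustrative sketch in Python. The applicants and the approval decisions are invented for the example; it shows how a decision process can satisfy a standard statistical fairness metric, demographic parity, while approvals within a historically disadvantaged group still flow only to the few people whom past inequality already favored.

```python
# Illustrative sketch with invented data: a decision rule can satisfy
# demographic parity (equal approval rates across groups) while still
# reproducing historical disadvantage.

# Hypothetical applicants: (group, years_of_documented_credit, approved).
# Group B has historically had far less access to formal credit records.
applicants = [
    ("A", 10, True), ("A", 8, True), ("A", 2, False), ("A", 1, False),
    ("B", 9, True),  ("B", 7, True), ("B", 0, False), ("B", 0, False),
]

def approval_rate(group):
    """Fraction of applicants in `group` who were approved."""
    rows = [a for a in applicants if a[0] == group]
    return sum(1 for _, _, ok in rows if ok) / len(rows)

# Demographic parity gap: difference in approval rates between groups.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"Demographic parity gap: {gap:.2f}")  # 0.00 -> "fair" by this metric

# Yet within group B, the approvals go only to the two applicants with
# long formal credit histories, i.e. the same people historical
# inequality already favored. The metric is satisfied; the question of
# whether the outcome is just remains untouched.
```

By the metric alone, this toy system is "fair"; whether its outcomes are just depends on the historical and social context that the metric never sees.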
Power runs through all of these questions. A small number of actors currently hold disproportionate influence over how AI is developed, deployed, and discussed. This concentration of power shapes markets, governance structures, and public narratives. Without deliberate intervention, AI will amplify existing inequalities rather than challenge them.
For this reason, I am skeptical that superintelligent machines are the most urgent threat we face. The real risk is that humans abdicate responsibility by treating AI as an autonomous decision-maker. Delegating authority without accountability is not progress. Our most pressing challenges—climate change, inequality, democratic erosion—are not problems of insufficient intelligence. They are problems of governance, coordination, and collective will.
This is the central paradox: the more we pursue technological solutions, the more essential human judgment, cooperation, and courage become.
AI does not determine our future. We do.