Fear, Futures, Responsibility: Rethinking the AI Debate

still from https://www.youtube.com/watch?v=QiT2yK-5-yg

I recently reacted, rather negatively, to the interview with Yuval Harari at Davos 2026. My reaction was not quite on the points it should have been, so let me restate my position more carefully.

My concern is not about silencing voices. It is about raising the level of precision and responsibility in how AI is discussed — especially in influential forums. When prominent thinkers frame AI in civilizational or quasi-mythical terms, the effect is not merely rhetorical. It shapes public understanding, policy priorities, and investment decisions.

A more useful intervention in this debate would:

  • Clarify what current AI is and is not
  • Distinguish speculative philosophy from technical reality
  • Connect AI to governance structures and institutional incentives
  • Ground claims in evidence rather than metaphor

So, what should we focus on?

1. AI is not an autonomous historical actor.

AI systems do not have intentions, goals, or agency. They are designed artefacts embedded in economic and political systems. The real issue is therefore not “AI taking over”, but:

  • Who designs these systems?
  • Under which incentives?
  • With what oversight and accountability?

Treating AI as a historical force obscures human responsibility. Technologies do not shape history independently. Institutions and power structures do.

2. The risk is not machine consciousness, but over-trust in systems that lack understanding.

Current AI systems detect statistical patterns. They optimize correlations. They do not understand meaning, context, causality, or moral consequence. And correlation is not comprehension. The danger lies in delegating decisions to systems that:

  • Cannot explain their reasoning in human terms
  • Cannot understand the social consequences of outputs
  • Cannot take responsibility

Over-trust in systems that lack understanding is a governance problem, not an existential metaphysics problem.

3. Existential hype diverts attention from real harms.

Existential hype serves the interests of some very powerful actors. Speculative narratives about superintelligence can, intentionally or unintentionally:

  • Shift focus away from present risks (bias, opacity, labor displacement, environmental cost)
  • Legitimize rapid deployment under the argument that “we must race forward”
  • Serve the interests of powerful actors who benefit from urgency and deregulation

We should prioritize governance of systems already deployed (in welfare allocation, predictive policing, hiring, healthcare, and education) rather than hypothetical future entities. Current harms are not theoretical. They are administrative, structural, and political.

4. The “inevitability” narrative is politically convenient.

When AI is framed as unstoppable destiny, two things happen:

  • Public agency weakens
  • Accountability diffuses

AI is not the weather; it does not simply 'happen to us'. It is designed, financed, and regulated (or not) through human choices. We need to reframe the discussion around:

  • Collective choice
  • Democratic oversight
  • Institutional responsibility

Technological determinism is not analysis; it is abdication.

5. Intelligence is socially grounded. AI is not.

Human intelligence emerges from cooperation, shared norms, moral development, and social embedding. It is relational before it is computational. AI systems simulate outputs. They do not participate in moral communities. They do not experience responsibility, empathy, or accountability. The future hinges less on Artificial General Intelligence and more on:

  • Human cooperation
  • Regulatory capacity
  • Institutional design
  • Global governance

Superintelligence is not our central challenge. Coordination is.

6. The real governance question is power, not apocalypse.

Ask yourself: who controls data, infrastructure, compute, and deployment pipelines? The concentration of AI capability in a small number of corporations and states poses immediate democratic questions. Fear of apocalypse obscures a more urgent issue: power asymmetry.

The core question is not whether AI becomes godlike. It is whether governance remains human and democratic.
