AI is a choice, not a solution

Image by Istvan Brecz-Gruber from Pixabay


2026 is only a couple of weeks in, and AI is forcing concrete political choices. Governments have blocked systems like Grok over real-world harm [1], while energy and housing policies are being reshaped to accommodate power-hungry data centers, making clear that AI deployment competes with social priorities rather than floating above them [2].

These are reminders that AI is not a solution to social problems but a tool whose impacts depend entirely on human judgment, governance, and responsibility. I mean a tool in the strict sense: not an agent, not intentional, not moral. It does not decide, understand, or care. People do. As such, it cannot replace human judgment, ethics, or responsibility.

There is no technological fix for inequality, unsustainability, or injustice.
These are political and social challenges. AI amplifies existing structures, fair or unfair, depending on how we choose to design and deploy it. The direction is ours to choose.

AI is neither intelligent nor neutral.
It reflects human decisions, values, and blind spots. Every dataset excludes someone. Every model prioritizes something. Every system consumes energy, materials, money, and human labor that could have been used elsewhere. These are trade-offs, not inevitabilities.
If we want better AI, we must first build fairer societies.

AI is not inevitable.
National strategies, research agendas, and innovation paths are political choices. Societies can decide when, why, and whether AI should be used at all. They have the right, and indeed the responsibility, to shape AI according to their own values rather than import someone else’s priorities by default.

AI is not virtual.
It has a footprint: data centers, energy use, rare minerals, invisible workers. Sustainability is not an add-on. If AI is not sustainable and fair by design, it is a failure, no matter how “innovative” it looks.

AI does not arrive like the weather.
We build it. We deploy it. We benefit from it, or suffer its consequences. Responsibility cannot be outsourced to algorithms.

Governance is not the enemy of innovation.
It is what makes innovation trustworthy, durable, and legitimate. Sometimes the most responsible, and innovative, decision is not to automate, but to protect human judgment where it matters most.

Clear rules do not slow progress.
They create the trust that progress depends on.

Responsible AI starts with Question Zero:
Not “What can we do with AI?” but “Should we use AI here at all, and why?”

The real challenge is not building more AI. It is building more trust, inclusion, and shared understanding, and ensuring AI serves humanity, not the other way around.

AI will not determine our future. The choices we make about where to deploy it, where to refuse it, and what costs we are willing to accept will. Responsibility for those choices lies with people and institutions, not with technology.


[1] See:

[2] See:
