An AI Winter’s Tale: Governing AI in a Pivotal Year
As this year draws to a close, it is hard to ignore the profound impact that artificial intelligence has had on public debate, economic strategies, geopolitical manoeuvring, and everyday practices. To grasp the speed of these developments, and in keeping with the season, let us imagine them in the form of a winter’s tale. Like all tales, this one carries a message that brings into sharper focus the choices and tensions we face today.
Once upon a time… there was a village, perhaps somewhere near the North Pole but not very different from our own, that decided to adopt an advanced technological system to help manage collective life. As the festive season approached, with so many tasks requiring coordination, they turned to a new helper: an “all-knowing” machine that promised to simplify what was becoming hard to manage. It would write letters, plan routes, give advice, draft speeches, and forecast the weather. It even claimed it could help with Christmas wish lists. Before long, it was embedded in routines of logistics, governance, communication, and risk management.
For a while, the system’s efficiency impressed everyone. The responses were immediate, confident, and delivered with convincing authority. But problems soon emerged. The system wrote an impeccable speech for the mayor… which mentioned a festival that had never existed. And it advised the bakery to produce only gingerbread cookies because historical data indicated they were the population’s favourite, even though, in reality, no one in that community ate them, and ginger was not even available. Gradually, the villagers realised that, despite its sophistication, the system did not understand them. It knew patterns, not people. It merely predicted what was statistically plausible, without grasping the habits, needs, or traditions of the community. And they quickly discovered that what is plausible is not necessarily true. There are patterns that persuade by form but do not correspond to reality; suggestions that sound well-founded but have no solid basis.
Faced with these inconsistencies, the community decided to rethink the situation and ask a question too often forgotten: why are we using these systems here?
This is the Zero Question, the one that must be asked before all others, before any discussion of how or how quickly to use AI: does it make sense to use it in this context, and for what purpose?
With this new perspective, the community came together to decide what made sense to automate and what should remain under human responsibility and effort. They realised they did not need the machine to decide what the bakery should make, or to write their stories. But they did need it to detect early signs of forest fires, now too complex to identify without technological support. They also needed it to monitor water consumption during dry months and to plan repairs to the road linking them to neighbouring communities. Essentially, they wanted a machine not to replace human judgement, but to strengthen their capacity to act together. In short, they understood that the point is not what the machine can decide or do, but what the community truly needs, values, and chooses to take responsibility for. And these decisions are not technical optimisations; they are societal choices. They reflect values, expectations, and conceptions of collective responsibility, shifting the focus from technological capabilities to the purposes we choose to assign to them.
What happened in the village is not just a fairy tale. Similar systems are already being used today to support decisions in public services, in the screening of social support applications, in risk assessment, and in the production of educational or informational content. Many of these decisions appear neutral and efficient, but they embed invisible assumptions about normality, merit, or priority. The kinderopvangtoeslag scandal in the Netherlands (where thousands of families were unjustly flagged as fraudulent by automated systems) dramatically showed how such logics can produce exclusion at scale. In Portugal, although with less visibility, there are already concrete uses of automated and algorithmic systems: in Social Security, in the processing and prioritisation of benefits and requests; in the Tax Authority, in risk analysis systems and the selection of taxpayers for inspection; in the National Health Service, in triage mechanisms, clinical decision support, and resource management, such as call centres and prioritisation systems; and in public administration, through the automation of responses, approvals, and rejections on digital platforms. Even when we do not know that AI is being used, we are already living with its consequences. Most troubling is that many of these impacts are experienced without even being recognised as effects of AI systems, which makes decisions that affect real lives opaque and weakens democratic accountability. And, unfortunately, in many cases the impacts only become visible when someone is excluded, misclassified, or simply not recognised by the model.
So the lesson of the tale is simple. To use these systems responsibly, it is not enough to know the technology or to recognise that it is being used; we must understand the fundamental issues that shape its impact. We need to know what data underpin the answers we receive, what assumptions and values guide the models, what choices determined the algorithms used, and who made those choices. A first essential step is to make visible where and how these systems are used, through public algorithm registers—now adopted in several countries and made mandatory in the Netherlands after hard lessons were learned. These registers are fundamental to institutional transparency, but they are not enough on their own. Knowing that a system exists does not explain how it works, who oversees it, or how its decisions can be challenged. It is therefore equally necessary to clarify who is accountable for the consequences. Trusting AI is never trusting a machine; it is trusting the institutions that regulate it, the people who develop it, and the norms that guide its use. Trust is, above all, a social and political process.
Artificial intelligence does not transform society on its own initiative. What shapes the future are human choices: the data we collect, the goals we define, the incentives we accept, and the values we prioritise. The realisation that AI is not neutral is not theoretical; it is practical. Every AI system reflects choices, explicit or implicit. These choices do not occur only at the level of major policies or infrastructures. They begin with seemingly small decisions: whether or not to automate a process, whether to accept or question a recommendation, whether to delegate or retain human responsibility. It is in the accumulation of these everyday options that AI governance materialises, producing cumulative effects that shape institutions, professional practices, and social expectations. Every decision in its development—from selected data to preferred metrics, from tolerated risks to values deemed relevant—has social and political implications.
Algorithms need rules, but there are many possible ways to define them. And responsible data governance does not require storing everything that has ever been collected. The currently dominant approach is not inevitable; it is a choice. The idea that there are no alternatives sustains a narrative of power, not a technical necessity. Governing AI is therefore governing how we want to organise collective life. Governing AI is not only about containing risks, but about steering its development toward concrete public benefits. The central question is not whether the technology is advanced or efficient, but whether it helps improve living conditions, reduce vulnerabilities, and support fairer collective decisions. Without this criterion, governance is reduced to damage control rather than affirmed as a political and social project. This is not about managing machines, but about managing human decisions that determine the kind of society we want to build.
It is essential to reject the idea that there is a “technological solution” to every social problem. But recognising these limits does not mean rejecting the usefulness of AI. There are contexts in which its use is not only legitimate but socially desirable, opening real possibilities for collective benefit—contexts in which not using AI is itself a form of irresponsibility. In medical and pharmaceutical research, machine learning systems already support the identification of new molecules, the analysis of clinical images, and the development of more targeted therapies. In the environmental and climate domain, predictive models allow us to anticipate fires, floods, or droughts, supporting faster and more informed responses. In these cases, AI does not replace human decisions or political deliberation; it expands our capacity to understand complex systems and to act preventively, especially when the cost of inaction is high.
However, issues such as inequality, sustainability, inclusion, justice, or even democratic cohesion are not solved with algorithms, but with policies, public deliberation, robust institutions, and collective responsibility. All technology, including AI, is embedded in real socio-economic systems that have power asymmetries, structural failures, interests, and conflicts. There are no technological shortcuts to structural problems. No technology can, by itself, fix what is structurally political or economic; those challenges require informed human choices, institutional courage, and sustained social commitment. AI can help, but it cannot replace human action, nor resolve for us the tensions, priorities, and dilemmas that define life in society.
At the same time, the way we use AI introduces new risks. We live today in a culture that idolises efficiency: doing more, faster; producing more quickly; automating whenever possible. The promise of generative AI fits perfectly into this narrative: delegating cognitive tasks, producing content in seconds, reducing human effort. But we must ask: what kind of efficiency is this? If efficiency merely means reducing the time needed to produce texts, images, or other outputs, perhaps we are indeed gaining. But if it means replacing the intellectual effort that creates depth, rigour, discernment, and critical reflection, then what we lose is far greater than what we gain. The problem, then, is not efficiency itself, but its disconnection from clear social purposes. When efficiency becomes an end in itself, we risk optimising obsolete processes that should not exist, accelerating decisions that require deliberation, or automating practices that depend on context, responsibility, and human judgement. Clear and comprehensive rules are not optional; they are a basic necessity. Transparency must be a continuous practice, not a decorative statement.
Another important lesson follows: using AI responsibly requires more critical thinking, not less. Oversight, deliberation, and the ability to contest automated suggestions are pillars of democratic resilience. In a context that glorifies immediate efficiency, it is crucial to resist the temptation of automatic processes when they conceal trade-offs or weaken accountability. Technological speed does not replace collective judgement.
For this reason, the most relevant experiences are not necessarily technological but ones of governance. What we find in them are not exceptional machines but solid institutional structures, practices of scrutiny, and the reaffirmation of human agency at the centre. What counts is institutional design, not technical dazzle.
The lessons we draw are clear without being simplistic. AI is neither an oracle nor an inevitable force; it is a set of tools shaped by human intention. Human responsibility is not optional, and delegating parts of the decision-making process cannot become abdication. Regulation, far from hindering progress, provides the clarity and stability that sustainable innovation requires. And in the end, what truly matters is not the power of systems, but the responsibility with which we integrate them into society. Artificial intelligence, as we have seen, does not transform society on its own initiative; the future is shaped by the human choices we make about data, goals, incentives, and values. Governing AI is governing the kind of society we want to build. It is not about managing machines, but about taking responsibility for the human decisions that shape the future.
At the end of a year of rapid advances, uncertainty, and rising expectations, this reflection shows us something essential: the future of AI is not written. It depends on the frameworks we create, the values we place at the centre, and the collective choices we make. Answering the Zero Question forces us to consider social impact, risks, alternatives, and, above all, values. It forces us to recognise that not everything that can be done should be done; that not everything needs to be automated; that not all problems require AI; and that not all applications bring real benefits.
This responsibility is not only ethical; it is political.
AI is a technology that reconfigures power. Those who control data control part of reality. Those who define models define how that reality is interpreted. Those who determine incentives determine the direction of technology. Governing AI is governing these systems of power; it is governing ourselves. And that requires us to be able to explain not only how systems work, but why they exist, for whom they were made, and who bears their costs—economic, human, and environmental.
Because we cannot ignore impact. Currently, AI depends on energy, water, materials, mineral extraction chains, and physical infrastructure. Digitalisation is not immaterial. Behind every system there is also human labour: data annotators, content moderators, invisible workers who perform repetitive and emotionally exhausting tasks so that we can “play” with chatbots as if everything were automatic and magical. These are people who work long hours, in precarious conditions, to train, clean, and correct what models cannot do on their own. Responsible AI also means this: the duty to integrate environmental and social objectives from the outset—reducing emissions, avoiding excessive consumption, ensuring circularity, protecting those who work in AI’s invisible supply chains, and aligning innovation with planetary limits and global justice. Technology can help address the climate and social crisis, but it can also exacerbate it. Responsibility always begins in design and intention, never in the final patch.
Finally, we must talk about trust. Technology only serves the public interest when society trusts it. Trust does not arise from promises; it arises from practices: transparency, participation, scrutiny, and accountability. Governance is not the opposite of innovation; it is its condition. Clear, values-driven governance enables sustainable innovation.
For AI to contribute to more just, democratic, and sustainable societies, we need three things: human resilience, strong institutions, and citizen participation.
Artificial intelligence will not decide the course of society. We will. And that requires the courage to slow down when necessary, to ask why before asking how, and to build critical resilience in people and institutions. True innovation lies not in faster systems, but in more responsible and more resilient societies.
And as we move forward, perhaps the most important gesture is the simplest: to stop, assess, and decide, together, what technological ecosystem we want to build and what kind of society we want it to serve.
Years later, in the village of my tale, the machine was no longer a novelty. It was there, quietly, working when needed, switched off when it was not. People knew when to use it and, more importantly, when not to. They had learned to ask before automating, to correct before trusting, and to decide together what was worth delegating. Technology had not solved all problems, but it had helped avoid some, anticipate others, and free up time to care for what truly mattered. And so, without grand promises or miracles, the village realised that the future had not been decided by the machine, but by the choices they made in integrating it.
The tale ends here, but responsibility remains, always, in our human hands.
The future of AI will be what we choose it to be.