AI and Democracy: Reflections from Copenhagen
Today I spoke at the ADD Project summit on AI and Democracy. The conversations were rich and at times uncomfortable. Good. That is what democracy requires. Here are some reflections.
Technology does not serve people
I started my presentation with a paraphrase of Bruce Schneier: "If you think technology is the solution to your problems, you don't understand technology, nor your problems".
We have built a digital society without ever seriously asking whether we wanted one without human contact points. We have been brainwashed about what "convenient" means and what it costs. But AI is not neutral. It embodies human choices. And those choices, about data, incentives, design, and governance, shape outcomes in ways that can advance human dignity or undermine it.
Innovation needs a democratic dimension
We talk endlessly about innovation, but rarely about the conditions under which it serves democracy rather than eroding it. Responsible AI is not a constraint on innovation. It is innovation: technological, organizational, regulatory, social. Ethics and regulation are not the opposite of innovation. They are stepping-stones for it. But innovation is not using existing technology "as is." And it is certainly not just technological. We need innovation across disciplines, sectors, and actors.
We failed to put technology in its place
This failure is structural, rooted in how the technology ecosystem is constructed. As other speakers noted, AI-generated content is already being used by mainstream governments, often without adequate scrutiny. As academics, we need to understand the FOMO felt by governments. They are under real pressure to act, and our role should be to help them act wisely through science diplomacy and evidence-based policy, not to lecture from the sidelines.
The burden of responsible action must fall on those with agency and capacity. Incentives need to be aligned with control. And yes: tax those who extract value from AI.
The EU AI Act: a first step, not a finish line
The AI Act was meant to be a first iteration, not a definitive success. If it had to be perfect from the start, it would have had to begin with a much narrower scope. The point is to build trust, not break it. Developing regulation is also innovation, and it can avoid becoming "legal spaghetti." But governance is more than regulation: it includes standards, organizational structures, assessment frameworks, awareness, and civic participation.
Make sure people are really seen
The first choice is to ensure people are genuinely seen. Not as data subjects, not as users, but as people. Not all data is equal, and we need policy instruments around what to keep, what to abstract, and what to forget. AI does not happen to us. It is designed. Outcomes are shaped by choices, data, incentives, and governance. The current approach is not inevitable, nor is it technically the best.
As always, my message is to ask Question Zero: should AI be used here at all?
We cannot fully control Big Tech. We need alternatives.
Regulation can only go so far. Europe needs genuine alternatives. But beyond that, we need something more fundamental: the willingness to have a life independent from technology. Or better said: simply, to have a life.
This extends to how we organize content and information. Stop posting on a single platform. Be where people are, and create network effects that do not depend on a single gatekeeper. Digital sovereignty means having choices: not just conditioning the supply side, but also shaping the demand side and building volume across Europe.
Rediscovering curiosity and disagreement
We need to rediscover our curiosity. The algorithmic environment rewards certainty and speed, not exploration and nuance. Being with other people, genuinely, takes time and practice. Learning to agree and to disagree is a skill that atrophies without exercise.
AI's greatest paradox: the smarter it gets, the more we need human wisdom. Governing AI is governing ourselves.