AI consciousness is a red herring in the safety debate
(published in the Guardian on 6 January 2026)
The concern expressed in this Guardian article, that advanced AI systems might one day resist being shut down, deserves careful consideration, but treating such behaviour as evidence of consciousness is dangerous: it encourages anthropomorphism and distracts from the human design and governance choices that actually determine AI behaviour.
Many systems can protect their continued operation. A laptop’s low-battery warning is a form of self-preservation in this sense, yet no one takes it as evidence that the laptop wants to live: the behaviour is purely instrumental, without experience or awareness. Linking self-preservation to consciousness reflects a human tendency to ascribe intentions and feelings to artefacts, not any intrinsic consciousness in the artefacts themselves.
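To make the point concrete, designed self-maintenance of this kind can be written in a few lines of ordinary code. The sketch below is purely illustrative (the threshold, the helper names, and the use of the psutil library are my own assumptions, not anything from the article): a fixed rule, authored by a human, that "protects continued operation" without anything resembling a goal or an experience.

```python
import time
import psutil  # third-party library, assumed available, for battery readings

LOW_BATTERY_PERCENT = 10  # arbitrary threshold, chosen for illustration

def save_state() -> None:
    """Placeholder for whatever 'self-preservation' amounts to here:
    flushing buffers and checkpointing before power is lost."""
    print("Low battery: saving state and preparing to shut down.")

def monitor_battery(poll_seconds: int = 60) -> None:
    """Poll the battery and act when it runs low.

    Nothing here is ascribed to the machine: it is a rule written by a
    human and executed without any awareness of what shutdown means."""
    while True:
        battery = psutil.sensors_battery()
        if battery is not None and not battery.power_plugged:
            if battery.percent <= LOW_BATTERY_PERCENT:
                save_state()
                break
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor_battery()
```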
Crucially, consciousness is neither necessary nor relevant for legal status: corporations have rights without minds. If AI needs regulation, it is because of its impact and power, and to locate human accountability, not because of speculative claims about machine consciousness.
The comparison with extraterrestrial intelligence is even more misleading. Extraterrestrials, if they exist, would be autonomous entities beyond human creation or control. AI systems are the opposite: deliberately designed, trained, deployed, and constrained by humans, with any influence mediated through human decisions.
Underlying all this is a point the article largely overlooks: AI systems are, like all computing systems, Turing machines with inherent limits. Learning and scale do not remove these limits, and claims that consciousness or self-preservation could emerge from them would require an explanation, currently lacking, of how subjective experience or genuine goals arise from symbol manipulation.
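For readers who want to see what "symbol manipulation" means concretely, here is a minimal Turing machine simulator; the particular machine encoded (a unary increment) and all the names are my own illustrative choices. Every step is a table lookup over symbols, and neither learning nor scale changes that character of the computation.

```python
# A minimal Turing machine: a state, a tape, and a transition table.
# The table encodes unary increment (append one '1'), chosen for illustration.

# (state, symbol) -> (new_state, symbol_to_write, head_move)
TRANSITIONS = {
    ("scan", "1"): ("scan", "1", +1),  # move right past the existing 1s
    ("scan", "_"): ("halt", "1", 0),   # write one more 1, then halt
}

def run(tape: str, state: str = "scan", blank: str = "_") -> str:
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = TRANSITIONS[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

print(run("111"))  # -> "1111": four from three, by pure symbol shuffling
```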
We should take AI risks seriously. But doing so requires conceptual clarity. Confusing designed self-maintenance with conscious self-preservation risks misdirecting both public debate and policy. The real challenge is not whether machines will want to live, but how humans choose to design, deploy, and govern systems whose power comes entirely from us.
#AI #AIrisk #AGI #AIgovernance #ResponsibleAI #superintelligence
