India AI Impact Summit: The Optics of Inclusion
The India AI Impact Summit, held 16-20 February 2026 in New Delhi, was unlike the closed-door gatherings at Bletchley or the more government-centric forum in Paris. Around 300,000 people from every background walked through the doors: students, developers, civil servants, teachers, small business owners, a crowd far more diverse than the usual technocrats and CEOs.
This scale matters. The Global South was not an add-on; it was central. But the noise and spectacle around AI getting democratized is not the same as power and governance actually being redistributed.
Many sessions, many side-events, many stages. So much “show and tell,” so many demos, panels, and announcements, and yet far less time and space for decisions, commitments, and concrete steps forward.
The summit looked open. It felt open. The crowds were real, the diversity real. But power did not move with the crowd. It stayed in closed government tracks, invitation-only CEO sessions, and side-events where actual decisions were shaped over a glass of wine. Broad participation, narrow influence. And this is what concerns me: was openness used as theatre? Inclusion as optics? A public façade that leaves decision-making structures untouched is not just insufficient, it is dangerous, when governance of AI is at stake.
From behind those closed doors, and yet for all of us to see, a couple of things transpired: tech bros refusing to join hands in the show of unity pushed by Prime Minister Modi, and quotes from industry leaders that, without exaggeration, should unsettle anyone watching how this conversation frames the future. Quotes that reveal a lot, and whose impact I fear on the policymakers in the room who hold so much power and still understand so little:
At one session, Sam Altman responded to criticism of AI's energy footprint by saying: “It takes like 20 years of life and all of the food you eat during that time before you get smart.” With this he framed comparisons of AI training costs and energy use as unfair, because humans themselves take decades, and all that food, to develop.
This is not a clever technical point. It is a framing that reduces human development to calories and energy and implicitly argues that human “costs” are something to be measured the same way as bytes and kilowatt-hours. It suggests that concern about resource use is misplaced, because, by this logic, humans themselves are an inefficient product. That’s a framing choice, not a technical necessity.
And then there was Yoshua Bengio, routinely cited as a leading voice on safety. His framing at this summit, however, was not about aligning high-risk systems with human values; instead, he said AI systems should make predictions “without any goal” because goals introduce bias. This line comes from multiple attendee recordings of his remarks at the summit, where he argued that assigning explicit objectives to AI biases outcomes, pushing the conversation toward optimal prediction without purpose. What does this actually mean? It sounds technical, but the implied logic is: avoid defining values up front. That is a distraction from the urgent political question of who gets to decide what “good outcomes” are in the first place. Saying “no goals” sounds safer, but if no goals are specified, then powerful, opaque systems are left to their default designs and incentives, and those incentives are rarely democratic, equitable, or accountable. These quotes matter, and they share a common framing effect:
- One equates human life and development to a mere resource cost, erasing social, ethical, and democratic meaning.
- The other treats goals as a technical nuisance rather than the core of ethical alignment.
Neither quote was a neutral technical observation. Both are narrative moves that push responsibility onto users, markets, and society instead of acknowledging that values, politics, and governance are built into every design choice. This is precisely the dynamic I describe in "The AI Paradox": If we are not explicit about how we frame AI, we risk building policy on hidden assumptions about its nature, its purpose, and who is entitled to shape its future.
This is dangerous because policymakers can easily take these framings literally and feel reassured that AI risk is either a technical inefficiency (just compare energy inputs) or something that goes away if we “let the models just predict.” They don’t. And if we allow those framings to stand unchallenged, we risk normalising a politics of abdication at precisely the moment when responsibility is most needed.
******
Now that the dust has settled, this is what the summit actually revealed: participation ≠ power. Opening the doors to huge numbers is necessary, but it does not by itself create leverage or accountability. Openness was celebrated as impact. It isn’t. Real impact would look like:
- Governance frameworks with teeth, not voluntary pledges.
- Data sovereignty protections that prevent extraction economies.
- Public investment in independent research, not just adoption and rollout of products developed elsewhere.
- Labour strategies co-designed between governments, workers, and communities.
India’s summit showed that people care about AI. And this matters. A lot. But interest without institutional leverage is spectacle. And when key speakers use metaphors that devalue human development or deflect from explicit goal setting, it shapes policy debates in precisely the wrong direction.
Open participation should lead to open power structures. Right now, it does not.