Is AI the new Tulip Mania?
In the 1630s, the Dutch Republic experienced what is often described as the first recorded speculative bubble: Tulip Mania. Prices for rare tulip bulbs soared to extraordinary levels before collapsing abruptly in 1637.
Nearly four centuries later, massive capital flows into AI chips, data centers, and foundation models raise a familiar concern: Is AI our tulip moment?
Tulips were not useless. They were beautiful, desirable, and culturally significant. Their value stemmed from rarity and difficulty of reproduction; scarcity became a signal of importance. A simple belief system took hold: rarity meant value, rising prices confirmed significance, and ever-larger contracts promised future wealth. The logic was circular, but powerful.
When the fever broke, little remained beyond financial losses and a cautionary tale. This is the defining characteristic of a pure bubble: capital concentrates around an asset whose price reflects belief rather than any structural contribution to society.
Today’s AI expansion revolves around compute infrastructure (GPUs, hyperscale data centers, foundation models) and the pursuit of scale within an intensely competitive global ecosystem. Companies such as NVIDIA have experienced extraordinary growth, while governments increasingly frame AI as critical infrastructure.
Unlike tulips, AI systems are embedded in software development, customer service, drug discovery, logistics, finance, and education. Their applications are real. The risk is not that AI lacks potential, but that we may be anchoring its promise to the wrong variable: scale.
That is, the implicit doctrine of the current AI race follows a familiar ‘tulip’ pattern: the largest model will win, the most compute will dominate, and scale equals intelligence. This is not an engineering law; it is a belief system. We have shifted from observing that scale can improve performance to assuming that scale itself constitutes the breakthrough.
The consequences are visible: concentration of power in a small number of actors, escalating environmental demands, education systems prioritizing prompt literacy over deep competence, labor strategies oriented towards replacement rather than augmentation, and governance that lags behind infrastructure expansion. Infrastructure accumulation is mistaken for intellectual progress.
Moreover, even where measurable task-level gains are evident, these do not automatically translate into structural productivity growth, institutional innovation, democratic resilience, or expanded human capabilities. If scale were synonymous with transformation, we would expect to see sustained acceleration in economy-wide productivity. Yet national statistics show no clear AI-driven rise in GDP per hour worked. Explanations abound: diffusion takes time; benefits concentrate in specific sectors; measurement tools miss qualitative improvements; returns accrue mainly to capital rather than labor. These may be valid. But they echo earlier episodes, including the productivity paradox of the early IT era. When gains are captured narrowly within specific geographies, sectors, or financial systems, valuations can inflate without systemic advancement. Mania generates transactions; transformation requires institutional redesign.
The more subtle bubble may be forming in education and labor. When educational systems overemphasize short-term tool proficiency while underinvesting in critical thinking, governance, ethics, and socio-technical understanding, they align human development with the hype cycle rather than long-term societal needs. When labor markets prioritize substitution over augmentation and fail to build serious reskilling pathways, inequality deepens and institutional resilience weakens.
Tulips were beautiful. AI models are powerful. Both can attract valuation based on expectations of future dominance secured through size. The danger is not that AI is empty. The danger is conflating bigness with intelligence and centralization with progress.
Tulip Mania collapsed when belief shifted. AI will not disappear, but belief can migrate from exuberance to disillusionment with equal speed. If today’s investments continue to privilege scale over institutional redesign, dominance over distribution, and infrastructure over human capability, the reckoning will not be technological failure. It will be social and economic distortion: concentrated power, widened inequality, brittle institutions, and public trust eroded by unmet promises.
The choice is not between hype and rejection. It is between embedding AI within democratic, human-centered socio-technical systems, or allowing a scale-driven paradigm to define progress on its own terms. The first demands restraint, governance, and deliberate institutional change. The second accelerates until correction is imposed from the outside. Bubbles do not end because technologies vanish. They end because societies refuse to sustain the imbalance.