Computing Science Has an Identity Problem


In an earlier blog post, I asked whether the traditional foundations of computing science (algorithms, systems, formal methods) still constitute the core of the discipline or have become its infrastructure. That question is not abstract. It connects directly to what is happening in lecture halls and admissions offices, at least in the US, where enrollment in CS programs has declined steeply in recent admissions cycles.

The standard explanation is fear: students worry that AI will automate away programming jobs, that entry-level positions are disappearing, that the degree no longer guarantees what it once did. There is truth in this. Tech layoffs, a saturated junior developer market, and the visible capabilities of code-generating AI have made the traditional CS value proposition less convincing.

But I think the enrollment decline is a symptom of something deeper. It is not simply about job prospects. It is about a discipline that has lost clarity about what it is for.

For the past two decades, computing science positioned itself, implicitly or explicitly, as the discipline of building software. Curricula were organized around programming languages, data structures, software engineering, and systems architecture. Departments measured success by placement rates at technology companies. Students enrolled because the pathway from CS degree to developer salary was well-trodden and reliable.

But this approach was always, at best, a reduction. Computing science, properly understood, is the study of computation as a mode of inquiry: what can be formalized, what can be modeled, and what the limits and consequences of those formalizations are. But the reduction to building software was profitable, and so it persisted.

Now that AI tools can generate boilerplate code, debug simple programs, and produce passable software prototypes, the reduced version of CS looks vulnerable. Students are not wrong to sense this. If computing science is primarily about writing code, and code generation is increasingly automated, then the discipline's value proposition narrows further with each model release.

The irony is that what students are fleeing toward, dedicated AI and data science programs, often provides even less of the foundational understanding that would make them genuinely capable. As I have written before, knowing only AI means not really knowing AI. The same principle applies to computing at large. A field that mistakes tool proficiency for understanding produces graduates who can operate systems but cannot evaluate, critique, or redesign them.

The automation fallacy

The narrative driving current anxiety rests on an assumption worth examining: that computing can be fully automated, that the human contribution to computational systems is primarily labor, and that this labor is now substitutable.

This conflates programming with computing science. Programming is an activity. Computing science is a scientific discipline. The discipline encompasses formal reasoning about what is computable, the design of abstractions that structure complex systems, the modelling of real-world phenomena through computational representations, and, increasingly, the analysis of how those representations interact with institutions, norms, and human behaviour.

None of this is automated by current AI systems. What is automated are specific, well-bounded tasks within the broader practice. These are significant capabilities, but they are not the discipline. The belief that computing can be fully automated mirrors the broader pattern I described in my Tulip Mania post: the conflation of scale with intelligence, of output with understanding. Automation of coding tasks does not render computing science obsolete. It renders a narrow conception of computing science obsolete. That is a different claim.

What the discipline has always been, and forgotten

I think that there is a historical amnesia at work here. Computing science did not begin as a vocational programme for software developers. It began as a branch of mathematical and philosophical inquiry into the nature and limits of formal systems. Turing's foundational work was not about building products; it was about what could and could not be decided by mechanical procedures. Early computing science was deeply entangled with logic, linguistics, cognitive science, and systems theory.

Over time, as the commercial applications of computing grew, the discipline's self-understanding shifted. It became increasingly defined by its outputs, software, systems, applications, rather than by its questions. The intellectual breadth that once characterised the field was gradually replaced by a skills-oriented identity that made departments legible to industry but impoverished as sites of inquiry.

But what we need now is not nostalgia for the era of Turing and von Neumann, but a recognition that computing science has always been more than programming, and that its current crisis is in part the consequence of having forgotten this. 

Computing as socio-technical discipline

I have argued that computing science is increasingly a socio-technical discipline: that its central questions are not only about correctness and efficiency but about impact, power, and institutional design. Computational models are not neutral. They encode assumptions, simplify complexity, and shape decisions. They define who is seen, what counts, and how resources are distributed.

This is not a fashionable addition to an otherwise stable curriculum. It is an intellectual requirement imposed by the nature of what computational systems do. When algorithms determine credit access, structure public discourse, allocate medical resources, and inform judicial decisions, the question of what computing science is about cannot be answered by pointing to data structures and sorting algorithms.

A computing science education adequate to the present moment must develop formal and technical competence, but also the capacity for normative judgment: what should this system do, and to whom is it accountable? It must cultivate institutional reasoning: how does this system interact with governance, regulation, and organizational structures? And it must foster reflexivity: what assumptions does this model encode, and what are their consequences?

These are not side issues; they are the hard questions the discipline has been avoiding.

The academic dimension

There is a compounding problem on the academic side, which I addressed in my post on speed and knowledge. The same AI tools reshaping student expectations are reshaping academic practice. When a professor can use AI to produce a draft literature review in minutes, the incentive to invest months in a graduate student's intellectual development weakens. When funding cycles reward throughput, the slow formation of a researcher becomes a liability.

If departments respond to declining interest by further automating teaching, reducing mentorship, and optimizing for efficiency, they will accelerate precisely the hollowing out that makes the discipline less attractive. Students sense when a field treats them as pipeline inputs rather than as minds to be formed.

The alternative is to treat the current disruption as a catalyst for much-needed institutional redesign. This means defending the slow work of education, the formation of judgment, curiosity, and critical capacity, not because it is traditional, but because it is what produces people capable of governing the computational systems they build. The value of a computing science education lies not in the code a graduate can write but in the questions they know to ask.

The choice

Computing science is not dying. But a particular version of it, narrowly conceived, instrumentally justified, and increasingly automatable, is losing its hold, and fortunately so, I would say. What replaces it is not yet settled.

The worst outcome would be a proliferation of shallow AI programmes that train students to use tools without understanding what those tools do, how they fail, or whose interests they serve. The best outcome would be a discipline that reclaims its breadth: rigorous about foundations, serious about impact, and honest about the limits of automation.

The current moment is not a threat to computing science. It is an invitation to remember what computing science was supposed to be. The question is whether we are willing to hear it.
