From time to time, I find myself confronted by people who tell me that they feel that generative artificial intelligence is essentially doing the same job as the human mind. Some speak from a position of ignorance, but others espouse this view with a complete understanding of the way these systems work. In essence, the view they are taking is an extreme version of computationalism - the position that the mind is essentially an instruction- and information-processing machine, and that it can therefore be effectively simulated by a computer system. Often, this is presented as a binary choice between computationalism and biological naturalism, the latter holding that there is something uniquely special about biological creatures (and humans in particular) which facilitates the way that we think, and that it is conceptually impossible to replicate that cognition artificially. I found myself moved to write on this because I’m struck by how radically different an experience of the world a computationalist has from the one I do; in spite of this, I don’t think it’s necessary to believe that it is literally impossible to simulate human cognition in order to reject the idea that we are simply “thinking machines”, so to speak.
You’re more creative than you realise
For years, we have collectively pondered the question of whether one can separate art from its artist. Today, we are faced with a new one: whether art without an artist carries value or meaning independently. Works unburdened by emotion, or experience, or even by error, created and destroyed at the will of a few lines of code. Those who claim that these works are art will often advance the argument that the creativity of the artist is in the prompt given to the system, and there is probably a degree of truth to that; even if you feed an entire story into such a system, though, the output that you get is guaranteed to be but one of many possible interpretations of the input. When I read “a beautiful, dusky sunset over the city, with a night sky lit by fireworks”, I picture a specific scene; inevitably, you will picture a different one. If you were to place that prompt into Midjourney or a similar system, you and I would see the same image, but it would not be our images. Indeed, this is the entire point: the value-add of these systems is intentionally that they are able to create content beyond that of their input. A generative AI system that produces exactly the creative input that goes into it, pixel-for-pixel, is not a generative AI system - it is a textual interface to Microsoft Paint.
Even beyond simply the interpretation of those words, though, when we create a creative work, what we produce contains more than perhaps even we ourselves sometimes originally realise. There are, of course, obvious examples of this in traditional art forms - why the stars in a painted skyline are painted in a particular formation, or why a particular instrument is chosen for a melody - but this is by no means limited to those kinds of work that we traditionally think of as creative: it’s in our writing, it’s in our code, it’s in everything we produce. Depressingly few developers these days view their craft as an art form, yet when we write code, we’re frequently embedding our cultural assumptions, our understandings of the way the world works, and our own experiences in ways that aren’t obvious to us, or even to many of our users. The way we handle names is an excellent example of this - Falsehoods Programmers Believe About Names by Patrick McKenzie makes this point very eloquently - where even something that might seem as simple as a “first name” and “last name” field conceals myriad cultural and social complications. Sure, you can generate an output which contains a decision on all of those things, and it will most of the time contain the most common answers to all those questions - but the wisdom of the crowd is frequently one of bias and discrimination, especially against the most marginalised groups in any one social context.
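To make that concrete, here is a minimal sketch in Python of the kind of assumptions a “simple” schema quietly bakes in. The class and field names are mine, purely for illustration - they aren’t drawn from any particular system.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class NaiveUser:
    # Quietly assumes: everyone has exactly two names, in this order;
    # that the first of the two is a given name and the other a family name;
    # and that both are present, reasonably short, and representable here.
    first_name: str
    last_name: str


@dataclass
class User:
    # A single free-text field sidesteps some (though not all) of those
    # assumptions: mononyms, patronymics, family-name-first orderings, etc.
    full_name: str
    # A separate, explicit field for how the person actually wants to be addressed.
    preferred_name: Optional[str] = None
```

Neither model is “correct”; the point is that even the second one embeds decisions, and a generative system will happily emit the first without ever surfacing that a decision existed at all.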
“Reasoning” is a creative act, not just retrieval and generation
Generative systems incorporating functionality labelled as “reasoning” are, of course, now popping up. We have seen significant and often entrenched debate around whether the output of these systems emergently behaves sufficiently like reasoning to be useful - see The Illusion of Thinking and The illusion of “The Illusion of Thinking” and similar - but I think these miss the point somewhat. We are still sitting at the very same Chinese room problem as with generative systems as a whole, just staring at it from a different angle: “can this thing that produces output that looks like it can X actually X?” is still the fundamental question we find ourselves asking, except that now X is “think” rather than “speak Chinese” or “understand and provide answers”. These “reasoning” systems are trained to be performant when there is a set of concrete, clear rules to apply to come to an answer, such as in mathematics and some science fields, essentially relying on the wisdom of the consentlessly-scraped crowd; they may even be able to simulate “reasoning” to an extent sufficient to allow them to come up with non-obvious ways to approach some of those problems, such as in the case of agentic systems facilitating interactions with software or people. Yet, way more of the work we do is creative than we realise, and these systems - entirely reasonably - do not have the ability to understand that. A “reasoning” system will not consider the complexity involved in creating first name and last name database fields unless you specifically instruct it to. It will not understand the sociopolitical implications of the “decisions” it “makes”. It will not see where it sits in a power structure. It does not think. It does not know. It only hallucinates, it only generates, and fortuitously those often collide with the truth based on its training data.
Medicine, by way of example, is a field with a reality orders of magnitude less clear-cut than many outsiders see it to be. I frequently read people arguing that healthcare professionals’ sole saving graces from the advance of AI lie in the need for patients to feel cared for by another human being, or in the risk tolerance in healthcare being lower than in other industries, and whilst both of these are excellent points, there is a glaring third challenge here: the deep, complex, multifactorial thinking that goes into clinical decision making. Practising as a healthcare provider is not a question of gathering a specific set of tests and seeing which are outside of reference ranges, or regurgitating a list of facts by rote - or, at least, for any good provider, it isn’t - but instead involves a gestalt analysis of a patient, their conditions, the pre- and post-test probability of pathology, their circumstances, the context of the system in which they have come to you, their hopes, their worries, and their expectations, among myriad other things. A “reasoning” system based on a large language model cannot achieve that, because it is not actually aware of any of those things, and unless there’s enough people who have listed out that they are relevant in its training dataset, it isn’t going to make those connections. It’s why I find widely-publicised claims such as “[anaesthesiology] will be gone soon”, to be replaced with AI, so absurd: they betray a fundamental misconception of either the nature of these systems, or the nature of practice as a healthcare professional, or both.
Automation can’t substitute for meaning
Even if we presume that all of these challenges were somehow solvable, though, we find ourselves confronted by the perhaps deeper question of “who does this help?”. Dr Julia Kołodko rightly points out that meaningful work is not just a way to earn a living, but a significant psychological protective factor for many people: “when we lose the opportunity to contribute in a meaningful, creative way, we don’t just disengage. We suffer.” When we search for efficiency at the expense of creativity, and of understanding, and of experience, as eevee puts it, we internalise the idea that “doing things is worthless”, and that some metric of how quickly we are able to Produce Output is what matters most. I’ve seen this with people I respect: I read an old manager’s blog the other day only to find that the post I was reading was clearly almost entirely the hallucinations of a large language model, ironically talking about the opportunities that AI offers, and I’ve reviewed code, apparently produced largely by generative models, that is functional but works in a way so bizarre it would surprise me coming from a first-year undergraduate computer science student. Hell, I’ve even had a colleague ask me for advice before, and then upon getting my advice respond that it was impossible to do the thing I advised because ChatGPT said it wasn’t possible (spoiler alert: it very much was). Yet, it seems that a significant portion of the population either doesn’t care or feels compelled not to admit to caring - and I’m genuinely terrified for what that means for our society as a whole. The widespread adoption of generative AI may have been one of the most societally-damaging technological developments of recent years, not even directly because of the AI itself, but because of the implications it has for our expectations of ourselves and the people around us.
I’ve written before about the deep frustration I feel over the enshittification of the Internet. This goes beyond just that, though: we’ve allowed our technocapitalist view of what it means to exist in the enshittification era to seep into the way we view everything else as a consequence. There’s a fine line somewhere between a technological advance that nobody knew they needed because they didn’t understand what it would do for them, and technological change that nobody actually needed, or that pits one group in society against another. The bridge across that line is the process of enshittification: when Meta randomly turns WhatsApp searches into generative AI prompt engines that nobody asked for, or Google just decides Gemini now has default access to your messages and phone calls even for users who’ve previously opted out, or Microsoft makes how much AI its employees use one of their performance indicators, we are training people to view technology as something Imposed Upon Them. It is no longer an aid, a device to make their workflows easier: it is the way things must work because Someone Somewhere Says That Is How It Must Be.
That’s deeply depressing.
A tale of three futures
The way I see it, there are three main options for how this story develops from here. The first is probably the one I’m least hopeful for in reality, but is perhaps the best: some great innovation comes along, proves me completely wrong, and resolves all of these issues. The problem with this is that, to my mind at least, it would require a paradigm shift in the way that these systems operate; one can tinker with training data to increase the incidence of hallucinations that collide with reality, but that does not fundamentally address the challenge that the system is not capable of “understanding” anything per se, including the difference between truth and lies. Then, we collide with the simple facts of our economic system: time and time again, budget pressures and profit-centric outcome measurements lead to quick and cheap but detrimental options being selected, and the use of generative AI technology perhaps offers the pinnacle of that particular philosophy.
One way that future could perhaps be different, though, is simply through an increasing level of public consciousness that perhaps this whole bubble isn’t all it’s cracked up to be. People are being increasingly exposed to stories of how unreliable this technology can be, and we know that it is leading to them making different purchasing decisions: people are less likely to buy products that proudly advertise “AI” functionality than ones simply branded as “new technology”. Right now, companies are betting quite a lot of their branding on that changing, and perhaps it will: or, perhaps, the public’s view of these technologies gets even more dismal as they are shoved into people’s faces more and more, and users experience for themselves how utterly, damagingly wrong they can be (Apple Intelligence notification summaries, anyone?). There’s only so far the end consumer can be pushed on this before companies are essentially forced by negative public perception to change course - for B2B services in many industries, that point may come around even quicker (even if you’re an early adopter in your personal life, it’s a fair chunk more challenging to convince senior leadership to accept those same risks unless they happen to also be on board that particular hype train).
What worries me most, though, is the final future that could come from all of this - the nothing matters future. This is the future in which the output of generative artificial intelligence technology is accepted as the standard for the quality of our work, and rather than us adjusting it, it adjusts us. We’ve doubtlessly already seen the start of this in some areas: more code is being copy-pasted, and both throughput and stability of software delivery have been negatively impacted, since the advent of generative coding. You might say “but this doesn’t matter so much, it’s only software” - and I’d say that that betrays a pretty dismal view of the software profession and what we should aspire to - but more importantly, this is just the start: imagine a future where local newspapers end up publishing generated articles without human review, for instance, or where lecturers let generative AI write their teaching and assessment materials without checking them. Except these things are already happening - this world is already here. It’s easy to see why: these people are often both time- and resource-poor, and when we systematically prioritise volume of output over quality of output, these kinds of consequences follow naturally. The logical next steps, though - charities using AI to generate end-of-year reports because it’s cheaper than hiring someone in to do them, or small media organisations generating news reporting from LLM synthesis of social media output because news doesn’t make money, or classroom teachers presenting PowerPoints that are constructed using LLMs so they have more time to mark work - all would probably be bandied around as breakthroughs, all feature people who have perfectly legitimate reasons to want more time in their lives, and all end up with potentially catastrophic consequences for our society when things go wrong downstream.
we're throwing an entire generation of non-technical members of the public head-first into the Chinese room problem, without them understanding what the Chinese room problem is. and, to be clear, that's not on them; that's on us.
I fear our road to an LLM-induced enshittification of the real world may turn out to be neatly paved with the calendars and performance reviews of everyday folks with the best of intentions. If and when that happens, though, we can’t be blaming them: after all, it will only have happened because those of us in technology let our standards drop. Instead, right now, we should be introspecting about how we might let such a thing come about - and what it might say about our goals as a society that we have that risk at all. The deeper we dig ourselves into this hole, the harder it will become to scramble out.

