Why ChatGPT Doesn't Actually Understand Anything
The gap between AI and human understanding isn't about processing power; it's about our capacity for Care.
As of this book’s writing in 2024, machine learning systems such as ChatGPT have advanced to the point where their responses to questions can track, to an impressive degree, how human beings use written language. ChatGPT’s ability to incorporate context in conversationally appropriate ways makes interacting with these models feel uncannily natural at times. Of course, training an AI language model to interact with humans in ways that feel natural is far from an easy problem to solve, so all due credit to AI researchers for their accomplishments.
Yet in spite of all this, it’s also accurate to point out that artificial intelligence programs don’t actually understand anything. This is because understanding involves far more than responding to input in situationally appropriate ways. Rather, understanding is grounded in fundamental capacities that machine learning systems lack. Foremost among these is a form of concernful absorption within a world of lasting consequences; i.e., a capacity for Care. To establish why understanding is coupled to Care, it will be helpful to explore what it means to understand something.
To understand something means to engage in a process of acquiring, integrating, and embodying information. Breaking down each of these steps in a bit more detail: (1) Acquisition is the act of taking in or generating new information. (2) Integration involves synthesizing, or differentiating and linking, this new information with what one already knows. (3) Embodiment refers to how this information gets embedded into our existing organizational structure, informing the ways that we think and behave. What’s important to note about this process is that it ends up changing us in some way. Moreover, the steps in this sequence are fundamentally relational, stemming from our interactions with the world.
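To make this three-step model a bit more concrete, here is a deliberately simplified sketch in Python. Every name in it (Learner, acquire, integrate, embody) is a hypothetical illustration of the conceptual loop described above, not an implementation of any real cognitive system; the point it demonstrates is that integration and embodiment change the learner itself, so that later encounters unfold differently.

```python
# A toy illustration of the acquire -> integrate -> embody loop.
# Names and structure are hypothetical; this is a conceptual sketch,
# not a model of any real cognitive architecture.

class Learner:
    def __init__(self):
        self.knowledge = {}   # what the learner already knows
        self.habits = []      # dispositions that shape future behavior

    def acquire(self, observation):
        """Step 1: take in new information from the world."""
        return observation

    def integrate(self, info):
        """Step 2: differentiate and link new info with prior knowledge."""
        topic, detail = info
        self.knowledge.setdefault(topic, set()).add(detail)

    def embody(self, info):
        """Step 3: let the information reshape how the learner acts.

        Crucially, this step changes the learner itself: the same
        observation encountered later will be handled differently.
        """
        topic, _ = info
        if topic not in self.habits:
            self.habits.append(topic)

    def encounter(self, observation):
        info = self.acquire(observation)
        self.integrate(info)
        self.embody(info)


learner = Learner()
learner.encounter(("fire", "it burns"))
print(learner.knowledge)  # {'fire': {'it burns'}}
print(learner.habits)     # ['fire'], i.e. the encounter left a lasting mark
```

The contrast with a stateless system, one that maps inputs to outputs without being changed by the exchange, is the crux of the argument that follows.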
While machine intelligence can be quite adept at the first stage of this sequence, since digital computers can accumulate, store, and access information far more efficiently than a human being, it’s in the latter steps that it falls flat in comparison to living minds. This is because integration and embodiment are forms of growth that stem from how minds are interconnected with living bodies. In contrast, existing forms of machine intelligence are fundamentally disembodied, because digital computers are organized around wholly different operating principles than those of living organisms.
For minds that grow out of living systems, the interconnections between a body and a mind, and between a body-mind and an environment, are what allow interactions with Reality to be consequential for us. This is an outcome of the fact that our mind’s existence is sustained by the ongoing maintenance of our living body, and vice versa. If our living bodies fail, our minds fail. Likewise, if our minds fail, our bodies will soon follow, unless artificially kept alive through external mechanisms.
Another hallmark of living systems is that they’re capable of producing and maintaining their own parts; in fact, your body replaces about one percent of its cellular components on a daily basis. This is evident in the way that a cut on your finger will heal and, within a few days, effectively erase any evidence of its existence. One term for this ability of biological systems to produce and maintain their own parts is autopoiesis (a combination of the ancient Greek words for ‘self’ and ‘creation’).
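To put that one-percent figure in perspective, a few lines of Python suffice. The exponential-turnover framing below is an illustrative simplification (real turnover rates vary enormously by tissue type), but it conveys how thoroughly a living body remakes itself.

```python
# How much of the body's original cellular material remains after a
# given stretch of time, assuming roughly one percent of components is
# replaced each day? The one-percent figure comes from the text above;
# treating turnover as uniform and compounding is a simplification.

DAILY_TURNOVER = 0.01

def original_fraction(days: int) -> float:
    """Fraction of original components remaining after `days` of turnover."""
    return (1 - DAILY_TURNOVER) ** days

for label, days in [("one week", 7), ("one month", 30), ("one year", 365)]:
    print(f"after {label}: {original_fraction(days):.1%} original")
# after one week: 93.2% original
# after one month: 74.0% original
# after one year: 2.6% original
```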
The basic principles behind autopoiesis don’t just hold true for your skin, but for your brain as well. While the neurons that make up your brain aren’t renewed in the same way that skin or bone cells are, the brain itself has a remarkable degree of plasticity. Plasticity refers to our brain’s ability to adaptively alter its structure and functioning. And the way that our brains manage to do this is through changes in the junctions through which neurons connect to one another (known as ‘synapses’).
How we end up using our mind has a direct (though not straightforward) influence on the strength of the synaptic connections between different regions of our brain, which in turn influences how our mind develops. This is also why the science fiction idea of ‘uploading’ a person’s mind to a computer is pure fantasy: how a mind functions is inextricably bound up with the network of interconnections in which that mind is embodied.
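The principle that use strengthens connections is often summarized by the Hebbian slogan that neurons which fire together wire together. The sketch below is a minimal, hypothetical illustration of that kind of activity-dependent update; the learning rule and its parameters are stand-ins chosen for illustration, not a model of actual synaptic biochemistry.

```python
# A minimal sketch of Hebbian-style plasticity: a connection that is
# repeatedly used grows stronger. The rule and learning rate are
# illustrative assumptions, not actual neuroscience.

LEARNING_RATE = 0.1  # illustrative value

def hebbian_update(weight: float, pre_active: bool, post_active: bool) -> float:
    """Strengthen the connection when both sides are active together."""
    if pre_active and post_active:
        weight += LEARNING_RATE * (1.0 - weight)  # grow toward a ceiling of 1.0
    return weight

w = 0.2  # initial connection strength
for _ in range(5):  # repeated co-activation, i.e. 'practice'
    w = hebbian_update(w, pre_active=True, post_active=True)
print(f"connection strength after practice: {w:.2f}")  # ~0.53
```

Notice that the update leaves the system permanently altered: the ‘practice’ is embodied in the weight itself, which is the toy analogue of the point being made here about minds and their substrates.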
This fundamental circularity between our autopoietic living body and our mind is the foundation of embodied intelligence, which is what allows us to engage with the world through Care. Precisely because autopoietic circularity is so tightly bound up with feedback mechanisms inherent to Life, it has proven extraordinarily challenging to create analogues for this process in non-living entities. It has yet to be demonstrated whether autopoietic circularity can be replicated, even in principle, within the system of deterministic rules that governs digital computers.
Furthermore, giving machine learning models access to a robotic ‘body’ isn’t enough, on its own, to make these entities truly embodied. This is because embodiment involves far more than having access to and control of a physical body. Rather, embodiment is a way of encapsulating the rich tapestry of interconnections between an intelligence and the physical processes that grant it access to a world (keeping in mind that everything that your body does, from metabolism to sensory perception, is a type of process).
For the sake of argument, however, let’s assume that the challenges involved in creating embodied artificial intelligence are ultimately surmountable. Because embodiment is coupled to a capacity for Care, the creation of embodied artificial intelligence has the potential to open a Pandora’s box of difficult ethical questions that we may not be prepared for (and this is in addition to AI’s other disruptive effects). Precisely because Care is grounded in interactions that have very real consequences for a being, it also brings with it the possibility of suffering.
For human beings, having adequate access to food, safety, companionship, and opportunities to self-actualize aren’t abstractions, nor are they something that we relate to in a disengaged way. Rather, as beings with a capacity for Care, when we’re deprived of what we need from Reality, we suffer in real ways. Assuming that the creation of non-living entities with a capacity for Care is even possible, it would behoove us to tread extraordinarily carefully, since this could result in beings with a capacity to suffer in ways that we might not be able to fully understand or imagine (since their needs may end up being considerably different from those of living beings).
And of course, there’s the undeniable fact that humanity, as a whole, has had a rather poor track record when it comes to how we respond to those we don’t understand. For some perspective, it’s only relatively recently that the idea of universal human rights has achieved some modicum of acceptance in our emerging global society, and our world still has a long way to go towards the actualization of these professed ideals. By extension, our world’s circle of concern hasn’t expanded to include the suffering of animals in factory farms, let alone non-living entities that have the potential to be far more alien to us than cows or chickens. Of course, that’s not to imply that ‘humanity’ is a monolith that will respond to AI in just one way. Rather, the ways that beings of this type will be treated are likely to be as diverse as the multitude of ways that people treat one another.
And all of this assumes that the obstacles on the road to embodied artificial intelligence are surmountable, which is far from a given. It could very well be that the creation of non-living entities with a capacity for understanding is beyond what the rules of digital computation allow for, and that apparent progress towards machine understanding is analogous to believing one has made tangible progress towards reaching the moon because one has managed to climb halfway up a very tall tree. Yet given the enormity of the stakes involved, it’s a possibility that’s worth taking seriously. For what it’s worth, we’ll be in a much better position to chart a wise course through the challenges that lie ahead if we approach them with a higher degree of self-understanding. Which brings us back to the guiding purpose behind the journey that we’re undertaking: namely, that more epistemic awareness around how our minds work can help us navigate our world in more compassionate and productive ways.