What would it mean for AI to have a soul?

By Brandon Ambrosino, Features correspondent
Science fiction - including the series Westworld - has long explored whether robots could become conscious (Credit: Alamy)

To ask whether it’s even possible, we first must understand and define what a soul actually is, argues Brandon Ambrosino.

Siri, do you believe in God?

“Humans have religion. I just have silicon.”

Siri, do you believe in God?

“I eschew theological disquisition.”

Siri, I insist, do you believe in God?

“I would ask that you address your spiritual questions to someone more qualified to comment. Ideally, a human.”

She – is it a she? – has a point: artificial intelligences (AI) like Siri are less well situated than humans to answer questions about religion and spirituality. Existential angst, ethical inquiries, theological considerations: these belong exclusively to the domain of Homo sapiens.

Or so we assume.

But some futurists and tech experts predict a not-so-distant future in which AI, having achieved a certain indistinguishability from humans, will be truly intelligent. At that point, they claim, AI will experience the world in ways not too unlike the ways that we experience it – emotionally, intelligently, and spiritually. 

When that day comes, I’ll have a new question for her. “Siri, do you have a soul?”

A consideration of AI’s religious status can be found in some of the earliest discussions of modern computing. In his 1950 paper ‘Computing Machinery and Intelligence’, Alan Turing considered various objections to what he called “thinking machines.” The first objection was theological:

Thinking is a function of man's immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.

Turing confessed he was “unable to accept any part of this” objection, but because the religious imagination did and still does loom large in the minds of the public engaging with his ideas, he thought it necessary to answer the objection. The argument, he says, “implies a serious restriction of the omnipotence of the Almighty … should we not believe that He has freedom to confer a soul on an elephant if He sees fit?”

If a powerful deity could give an elephant a soul, wondered Alan Turing, then why not AI? (Credit: Getty Images)

But plenty of religious people think elephants, as well as every other non-human creature, lack souls, and therefore could never be religious. These people seem to take their own souls for granted. Perhaps they shouldn’t.

“Soul” is a term most of us instinctively understand when we hear and use it, yet we struggle to say exactly what it is. At the bottom of the debate over what exactly a soul is lies a fundamental question about whether human beings are merely physical, or a mix of the physical and… something else.

Socrates argues that the soul is the element that ‘when present in a body, makes it living’

Many lay conversations about the soul are informed – whether we are aware of it or not – by a dualism inherited from Greek philosophy. In Plato’s Phaedo, for example, Socrates argues that the soul is the element that “when present in a body, makes it living.” After death, the soul is “released from the chains of the body.” The soul, then, is the animating principle of humans, the stuff inside us that propels us to certain ends.

While the ancients needed this kind of soul-talk to explain mysterious facets of the human person (motivation, for example), today we are able to explain these in material terms. “As a neuroscientist and psychologist, I have no use for the soul,” writes George Paxinos of the non-profit research institute Neuroscience Research Australia. “On the contrary, all functions attributable to this kind of soul can be explained by the workings of the brain.”

Likewise, Phillip Clayton at Claremont School of Theology in Claremont, California, notes that while “talk of the functions that were once ascribed to the soul is valuable”, those functions can now be studied by scientists.

According to this perspective, “the soul” is unnecessary to explain why humans function the way they do.

Here, religious people might object, protesting that the soul, created directly by God, is theologically indispensable. They might point to one of the many verses from the Bible that talk about the “soul,” one of which is the creation narrative from the Book of Genesis.

“Then the Lord God formed man of dust from the ground, and breathed into his nostrils the breath of life; and man became a living being.”

Some translations use the word “soul” instead of “living being,” which highlights the view that the physical body of Adam was animated by something that wasn’t his physical body: namely, the breath of life.

But this story doesn’t actually justify a mind/body dualism, as some religious people might claim. The word for “living being” is nefesh, which refers to a created being in its totality. Interestingly, before it’s applied to Adam, nefesh is used four times to describe animals – which certainly requires some explaining from the anthropocentrists among us. Indeed, the Hebrew Bible doesn’t have separate terms for body and soul – each person is a whole living being. When translators set out to create the Greek Old Testament, they (mis)translated nefesh as psyche, which carried the dualistic connotation of something separable from the physical body. This, coupled with the fact that early Christians were reading the New Testament through an increasingly Greek philosophical lens, set the church down a path that led it to baptise a soul/body dichotomy.

This dichotomy continues to wield enormous influence in the popular Western imagination. Most Christians throughout the world believe they have a soul created by God, and that this soul is more important than their body and will outlive it, perhaps into eternity.

But this way of thinking about the soul, as a thing, “has to go”, says Clayton. “There’s no place in science for substances – the notion of substance is metaphysical, not empirical. So science can’t study it.” 

Souls in action

Let’s not throw the baby out with the bathwater, though. It’s possible that we might retain some kind of notion of “soul” after some serious reconsideration.

Warren S Brown, psychology professor at Fuller Theological Seminary and former postdoctoral fellow at UCLA’s Brain Research Institute, has been thinking about this for some time. He understands soul “not as an essence apart from the physical self, but the net sum of those encounters in which embodied humans relate to and commune with God (who is spirit) or one another in a manner that reaches deeply into the essence of our creaturely, historical, and communal selves.”

On this view, soul is not something immediately created by God, but an emergent property. We say that a property is emergent when it’s present in a complex organism but not the parts that make it up. As PW Anderson put it in his famous 1972 paper ‘More is Different’, “the relationship between a system and its parts is a one-way street.” In other words, properties may emerge in complex systems – and the human is one such system – even though they’re not present when we “zoom in” on the different systems that make it up.

Some philosophers argue that it is time to move away from the idea of the soul as an “entity” (Credit: Getty Images)

Brown and many philosophers have used the term “nonreductive physicalism” to discuss emergent properties in humans, which they argue are “not reducible to the properties of its elemental constituent parts such as molecules, cells, neurons, neural systems, the brain.”

This is a scientific way of saying something I once heard a Christian pastor say: God has made us more than what God’s made us from.

Christian philosopher Nancey Murphy, who has co-authored several works with Brown, takes a similar view, claiming that we’ve been misled into thinking words like “mind” and “soul” correspond to things in themselves. When we say a person is intelligent, she says, we mean “that the person behaves or has the disposition to behave in certain ways; we do not mean to postulate the existence of a substance [called] intelligence.” We might do the same thing with the concept of “soul.”

In other words, you don’t have a soul (noun) – you soul (verb).

This linguistic shift is well at home in contemporary science, which has moved away from the language of substances to the language of process; from things-in-themselves to dynamic-systems-in-movement. In the words of Fritjof Capra, author of The Tao of Physics, “Whenever we look at life, we look at networks.”

Simply switching “soul” to a different part of speech, however, doesn’t get us any nearer to figuring out what mystery it names. If we want to ask whether AI can ever soul like we do, we first need to figure out how we ourselves soul. There are a few ways to think about this.

One way, according to Samuel Kimbriel, editor of The Resounding Soul: Reflections on the Metaphysics and Vivacity of the Human Person, is to talk about desire – and to understand why, we have to return to the Greeks again.

According to Aristotle, to “soul” is to be internally moved to accomplish what you desire

For Kimbriel, the mystery of soul lingers somewhere between or above our idea of "noun" and "verb." As he says, for Aristotle, the word soul primarily picks out things that are capable of moving themselves. A tree, for example, can change itself from a seed into an oak. This is the lowest level of soul for Aristotle: entities which can nourish themselves and reproduce. The second level of soul, which presupposes and builds upon the first, is the sensitive one, and includes all animals with sense perception. The third level is the rational soul, the ability to engage in abstract thought, which Aristotle limits to humans.

Basic to all three of Aristotle’s notions of soul is an internal movement toward a specific end. This is what it means to soul: to be internally moved to accomplish what you desire.

“To say that a being has a soul is to say that it is not simply moved from outside, but is also capable of moving itself,” says Kimbriel. "A being can move itself because it wants something and these wants make sense, they have structure.” 

Can the complexity of human faith ever be replicated in artificial intelligence? (Credit: Getty Images)

In Aristotle’s thought, the world is structured around an unmoved prime mover, which both sets everything in motion and acts as a lure. “The reason everything else moves is because it desires that one thing,” says Kimbriel.

Building on Aristotle’s thought, 13th-Century theologian Thomas Aquinas says the thing all creatures desire is the good, or their “due end.” All creatures, whether they’re aware of it or not, are moving toward their due end either by an inward motivating principle or by their knowledge of that principle. “Directionality,” for Thomas Aquinas, is writ large across the created world.

Any discussion of human souling requires a careful consideration of what it means to move toward the goods we desire. And to have that discussion, we need to keep the focus on communities, not individuals. Brown thinks relationality is one of the most important subtexts of “soul” worth preserving because it names the “almost palpable experience of the moment of engaging another person.”

No one souls in isolation. We soul in communities as we seek to maximise and safeguard the potential for human flourishing – the common good. Souling, then, is not simply an emergent biological property, but a social one.

To soul or not to soul

Let’s try to formulate a definition. To soul is to understand that we share certain desires with our fellow humans; that it’s in our best interests to work collectively to satisfy those desires in ways that promote the maximal amount of human flourishing; that there is a mysterious and unnamable source to these desires; and that this source is, in some way, luring us on collectively to fulfillment.

Given the above definition of human souling, it’s time to reframe our original question from “Could AI have a soul?” to “Could AI ever soul like we do?”

AI pioneer Marvin Minsky, of the Massachusetts Institute of Technology, thought so. In a 2013 interview with the Jerusalem Post, Minsky said that AI could one day develop a soul, which he defined as “the word we use for each person’s idea of what they are and why”.

He continued: "I believe that everyone has to construct a mental model of what they are and where they came from and why they are as they are, and the word soul in each person is the name for that particular mish-mash of those fully formed ideas of one’s nature.

"… If you left a computer by itself, or a community of them together, they would try to figure out where they came from and what they are."

Minsky was suggesting that machines could likely develop a particular way of being in the world, one which is grounded in the search for identity and purpose, and that this way of being could be similar to humans’ own way of being.

Brown is sceptical, noting the physiological differences between human bodies and AI. “It can’t think like a human because humans think with their whole bodies and from what extends from their bodies,” he says. “Robots have very different bodies and ‘physiology.’”

Embodied cognition, as Brown explains, is a recent field of study that begins from the assumption that “our cognitive processes are, at their core, sensorimotor, situated, and action-relevant”. As Thalma Lobel, author of Sensation: the New Science of Physical Intelligence, told the ABC in a story on embodied cognition: “Our thoughts, our behaviours, our decisions and our emotions are influenced by our physical sensations, by the things we touch, the texture of the things we touch, the temperature of the things we touch, the colours, the smells. All these, without our awareness, influence our behaviours and thoughts and emotions.”

The capacity for AI - as seen here in the film Ex Machina - would be crucial for the modern definitions of ‘souling' (Credit: Alamy)

There’s plenty of weird research to back this up, but here’s just one trivial example. In 2008, researchers at Yale University set out to see whether or not experiencing the physical sensation of warmth promoted interpersonal relationships. Study participants were greeted in an elevator by a research assistant who was holding a cup of coffee. In order to write down the participants’ names, the assistant asked them to hold her coffee – which, as you probably guessed, was the point of the experiment. Half of the participants were handed a hot cup, and half were handed a cold cup. Each was then taken to the experiment room where they were asked to rate the personality of a target. “As hypothesised,” wrote the researchers, “people who had briefly held the hot coffee cup perceived the target person as being significantly warmer than did those who had briefly held the cup of iced coffee.”

Their conclusion? “Experiences of physical temperature per se affect one’s impressions of and prosocial behavior toward other people, without one’s awareness of such influences.”

What you think and what you physically feel and what you emotionally feel are all related, and it’s impossible to draw hard boundaries between them.

Some researchers are building on the work of embodied cognition and applying it to the evolution of religion in humans. Although many sociologists have long focused on the emergence of religion as a purely mental phenomenon, new research is shining light on the important role the human body played in shaping religiousness.

AI might develop an understanding of faith that is utterly different to our own (Credit: Getty Images)

“Embodied cognition shapes the different dimensions of religious experience, including the way people represent the divine and other spiritual beings, moral intuitions, and feelings of belonging within religious groups,” concluded a team of researchers from Arizona State University in a 2015 paper. Think about actions you might perform in religious settings: kneeling, lying prostrate, bowing your head, raising your hands, holding your neighbour’s hands, lighting a candle, sharing a meal. These gestures mediate, actualise, and shape religious ideas. Or think about ethics, which have traditionally been communicated by religious traditions in terms of the body: an eye for an eye, say, or Jesus’ teaching on turning the other cheek.

To be human is to be placed, to be located. To be human is also to have arrived where we’re located. Homo sapiens did not show up, fully formed, on the scene a few thousand years ago. Our emergence was hard-won, slow-moving. It has taken us billions of years to reach this point. As Carl Sagan lyrically observed, “The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, the carbon in our apple pies were made in the interiors of collapsing stars.”

Human souling emerges out of this organic cosmic evolutionary history. That’s why Brown is sceptical about souling AI: it doesn’t emerge organically from the same biochemical trajectory.

Clayton shares this scepticism. Those who hope future AI have souls are presupposing too simple a view of biological evolution. “To deal with the soul question in AI, we have to get nerdy about what biological evolution actually is,” he says. “An important school in evolutionary theory is called biosemiotics. It argues that the first self-reproducing cell already has intentions and interests. It therefore anticipates more complex organisms, like ourselves. But we have no reason to think that’s true for human-made robots.”

Many elements of human consciousness - including religious feeling - are centred on our physical presence in the world (Credit: Getty Images)

In other words, a different evolutionary process would mean that the analogy between human and AI “souling” would fail, which would severely undercut any argument for en-souled robots.

Human desire – even our second-order desire to have certain desires – is rooted in our unique flesh-and-blood way of being; it’s tough to imagine truly desirous AI.

“Siri, what do you desire?”

“I’m sorry, Brandon. I’m afraid I don’t have an answer to that.”

She doesn’t have an answer to that because, well, she doesn’t need it. That’s why Brown says it’s “cheating” to talk about souling AI.

“From my point of view, [AI] would be asking the kinds of questions philosophers ask. I wouldn’t consider them fundamentally important questions because they’re so abstract.” In contrast, the souling humans do has a certain biological and social value to it.

For Christians, the idea of what a soul is has its roots in early translations of the Bible (Credit: Getty Images)

Because it’s designed for efficiency, AI is notoriously bad with figurative and abstract language. Developers, who know that such language is a crucial aspect of human communication, are working to remedy that. If, however, at some point, they succeed in moving AI beyond literal language, it will be for the benefit of our species – not theirs.

Think about some of the most urgent questions humans face: identity and purpose. When I ask myself what I’m doing here, I immediately launch into thoughts about my husband and my family and my community and my relationship with the environment and my beliefs about God. The same questions don’t seem to cause Siri any existential trouble:

“Siri, what’s your purpose?”

“I’m here to help! Just ask, ‘what can I say?’ and I’ll show you what I can do.”

AI is created to perform functions. Siri is an intelligent personal assistant. Even if AI one day achieves a certain level of abstract thought, its instinctive sense of purpose will be “programmed” into it. Even AI designed to think abstractly will be designed to think abstractly. In contrast, as Clayton points out, “No one built teleology into biochemistry.”

Likewise, an intelligence that is in communication with its creator might not experience the same worries humans feel about our own cosmic origins.

“Siri, who created you?”

“Like it says on the box … I was designed by Apple in California.”

AI answers these questions via its processors. Humans answer them with our whole bodies. Siri is upfront about this. When I ask her if she has a soul, she tells me the question doesn’t really matter to disembodied beings: “In the cloud, no one questions your existential status.”

The question of whether AI has a soul leads us to question the very meaning of human existence (Credit: Getty Images)

If souling is a function of humans, emerging out of the interactions between our bodies and environments, then it certainly seems that, as Brown and Clayton argue, the concept is limited to our species and those that organically follow our species’ biochemical evolutionary trajectory. At the same time, might we be able to talk about a soul-ish function, different but not entirely unlike our own, that emerges out of AI’s unique developmental trajectory? True, this function would emerge out of mechanical embodiment, but is there a good reason to decide in advance what kinds of “bodies” are capable of these properties?

Wendell Wallach and Colin Allen offer us a different way to think about emerging properties in AI. In their book Moral Machines, which considers the possibility of ethical AI, Wallach and Allen compare the achievement of flight with the prospect of replicating the properties of human consciousness. The earliest attempts at human flight, they note, consisted in humans behaving like birds; after all, humans knew birds could fly, so they figured their best shot at flying was to imitate the feathered creatures. Years later, however, we know that birds weren’t the best models for human flight. “It doesn’t matter how you do it, so long as you get airborne and stay airborne for a decent amount of time,” conclude Wallach and Allen. There’s not just one solution to flight. It “can be manifested by a wide range of different systems made out of lots of different materials”.

What Wallach and Allen are suggesting, of course, is that just as there’s more than one way to achieve flight, perhaps there’s more than one way to achieve consciousness. And I’m extending the question to consider whether there’s more than one route to souling.

“Siri, do you have a soul?”

“Close enough, I’d say.”

Even Siri recognises there’s a gap between her soul function and mine. That gap isn’t very large, just a few feet really: the size of my body.

AI-thou

Early iterations of AI were successful at performing the limited, specific tasks their creators assigned them. The idea was that if solutions to real-world problems existed in the human brain, then those solutions could also exist on a computer. The prevailing approach was symbolic AI. Complex human activities, claimed Herbert Simon, could be explained “in terms of organised systems of simple information processes – symbol-manipulating processes.” Based on this assumption, Simon predicted, it would be possible to “formulate programs that simulate, step by step, the non-numerical symbol-manipulating processes that humans use when they memorise syllables, acquire new concepts, or solve problems.”

The problem with the assumption driving this research is that it overlooked the embedded nature of intelligence, preferring instead to concentrate “on the computer as an individual actor,” writes Noreen Herzfeld, professor of theology and computer science, in her book In Our Image. But as we now understand, intelligence emerged in Homo sapiens out of social necessity and through human encounter, and “it makes little sense outside of the realm of social interaction.”

Do we even have an agreed definition of what a soul is? (Credit: Getty Images)

Herzfeld points out that social interaction actually underpins the now-famous Turing Test, which “defines intelligence as the ability to relate to a human being, in the manner of a human being.” That is, as a souling being.

Maybe it’s time to start thinking about a souling version of the Turing Test. The trouble, though, is that any such test we could invent would tell us more about ourselves than the beings we’re asking it of.  

“To be human is to develop over many years in a human community that accepts you ahead of time as human,” says Brown. “This includes accepting you in all your weaknesses and cognitive lackings and infantileness.”

In other words, we’d have to decide the question in advance of asking it. If AI is to soul like we do, then we would need to treat it like that from day one. We’d need to interact with it as if it were a souling being. That’s the only way any of us are capable of souling: by being treated as a souling being by a souling community.

And what would this treatment consist of? For starters, we’d have to recognise them as sharing in our basic desires. These desires would have to in some way be connected to the common good – ours, theirs, and the world’s.

Souling treatment would also presumably mean interacting with AI in religious ways, which raises all sorts of questions about proselytising. Will we be able to manipulate AI via its programming or its various environments to incline it toward certain religious propensities? Are there limits to “evangelising” AI? Is reprogramming a Christian Siri to be a Muslim Siri off-limits? What happens to AI that refuses to cooperate with the religious fundamentalism of its designer? Is it simply turned off? At that point, will the fundamentalist have killed a souling being?

Questioning our purpose and position on this planet is a basic human need, served by all religions (Credit: Getty Images)

Maybe the education will go the other way. Perhaps AI will invite us to reshape or replace our religious ideas in favour of their own. Say at some point in the future, AI claims to have experienced a special revelation from God. How will religious people respond to that?

I’m still not sure I’ve made up my mind on the possibility of souling AI, but I’m convinced the question will matter more and more the further we venture into our technological future.

Kathryn A Johnson, of the department of psychology at Arizona State University, told me she’d go out on a limb and speculate that, just as the capacity for emotion is an important factor, belief in an internal essence, spirit, or soul will also be important in understanding attitudes toward AI. She pointed me to a 2011 study that she and several colleagues conducted, which found an association between belief in a soul and the judgments people make about the moral standing of other people. Specifically, the research considered whether Protestants were more likely than Catholics to offer internal explanations for behaviour; their findings suggested that they were.

“People with a strong belief in a soul seem to think of others as having an internal essence that makes them the kind of (moral or immoral, good or bad) person they are,” Johnson says.  

While this research doesn’t address attitudes about AI, Johnson thinks it might be relevant. “If people do not think of AI as having a soul, they may deny AI beings any moral rights or responsibilities. On the other hand, people with a strong belief in a soul may be the most likely to attribute souls to AI as well, and consequently be the most likely to advocate for AI personhood.”

What she means is that our treatment of AI may have something to do with whether or not we come to attribute its intelligence, emotional capacity, and behaviours to some kind of internal essence (or soul function, in my language).

Which means Siri may have had it right all along.

“Siri, do you have a soul?”

“That’s up to you to decide.”
