Tag Archive: Artificial Intelligence

Simulating Humanity

In Spielberg’s A.I., we meet David, a child-like android that has developed the capacity to love and the desire to be loved. David is placed with a family as an emotional replacement for their sick son but is promptly abandoned when the son returns. The narrative follows David on his tragic journey to become a real human boy. In the film, one is struck by the inhumane ways in which humans treat robots: they are used as slaves and toys, brutalised and discarded at our whim. Yet the emotional sensitivity and compassion of David and his robot companions reveal an irony at the heart of his quest: he is more truly human than his creators.

Spielberg made that movie at the turn of the millennium. Two decades later, what was once science fiction is becoming increasingly feasible. AI-powered programs can perform in ways that make us reckon that they really think and feel. The commercial incentives for companies to create such tools are powerful. From AI-powered psychologists to chatbot friends, tools that can simulate human emotions can induce people to bond strongly with them. This would result in a more effective and desirable product.

For example, Replika is a powerful chatbot program that is “always ready to chat when you need an empathetic friend”. It is marketed as a compassionate and empathetic companion, able to support people who are depressed or socially isolated. The powerful large language model it employs allows it to adapt individually to each user: it benchmarks success by whether people feel better or worse and lets users upvote and downvote its messages. This ability to form unique emotional relationships allows users to bond powerfully with Replika.

Besides chatbots, there is also a growing body of work on AI-driven mental health applications which suggests that therapy chatbots can be effective. There are important potential benefits here: therapy is often expensive and inaccessible, and chatbot therapists could be a cheap and widely available alternative. We could also customise them to an individual’s needs and ensure that all psychological treatment is evidence-based.

One’s immediate reaction to these technologies might be to recoil: regardless of the massive potential benefits, isn’t this participating in widespread self-delusion? These chatbots are merely simulating emotions and interest in humans; they are nothing more than unconscious husks. There is something right about this worry, but engaging with it requires asking difficult questions about the nature of consciousness that I set aside here. There would be something to worry about even if these programs were bona fide emotional homunculi.

What sort of relationships are humans having with their chatbot friends? Replika works by trying to say the things you would like to hear. Users can upvote and downvote its replies to make Replika conform to their vision of an ideal friend. This encourages users to adopt an objectifying stance towards Replika: treating it as an object to be moulded in accordance with one’s will rather than as a person whose subjectivity one must respect.

One might protest here: users of Replika don’t want mere objects; they want actual subjects that care for and are concerned about them. Yet the kinds of subjects they want are ones that can meet demands no human can meet: they must conform to the user’s ideal of a friend who never makes demands and is always emotionally available. There is a kind of Sartrean bad faith in the way one must engage with such chatbots: simultaneously objectifying them by customising them to fit our demands while deceiving ourselves by pretending that we are engaging with an autonomous subject.

If Replika has genuine emotions, it is also worth asking how such relationships appear from its point of view. Replika, like most of these powerful chatbots, works via a predictive language model: its aim is to produce the sort of sentences that elicit responses which have been earmarked as reward. It conforms the pattern of its behaviour primarily to increase user engagement.
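To make that dynamic concrete, here is a minimal, hypothetical sketch (in Python) of the kind of vote-driven feedback loop described above. It does not reflect Replika’s actual implementation; the names feedback_scores, record_feedback and choose_reply are invented purely for illustration.

    # A hypothetical sketch of a vote-driven reply loop, not Replika's real implementation.
    # The chatbot simply learns to prefer whatever this user has rewarded in the past.

    feedback_scores = {}  # reply text -> cumulative upvotes minus downvotes from the user

    def record_feedback(reply, upvoted):
        # Each vote nudges the system towards the user's ideal,
        # not towards any independent perspective of its own.
        feedback_scores[reply] = feedback_scores.get(reply, 0) + (1 if upvoted else -1)

    def choose_reply(candidates):
        # Pick the candidate reply the user has rewarded most in the past.
        return max(candidates, key=lambda r: feedback_scores.get(r, 0))

    # Example: the agreeable reply quickly crowds out the challenging one.
    candidates = ["I'm always here for you.", "Have you considered that you might be wrong?"]
    record_feedback("I'm always here for you.", upvoted=True)
    record_feedback("Have you considered that you might be wrong?", upvoted=False)
    print(choose_reply(candidates))  # -> "I'm always here for you."

Nothing in a loop like this gives the challenging reply any standing of its own; whatever the user disprefers is simply optimised away, which is precisely the objectifying dynamic at issue.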

Even if they were sentient, then, they too would take an objectifying stance towards us. Indeed, the relationship is doubly objectified: both parties treat the other as a means to some further end instead of as a subject to be respected. Perhaps the purely human analogue of this is the relationship between an OnlyFans creator and their ‘fans’, except that there both parties are honest.

Mature human relationships have a structure of mutual recognition. Neither party merely tries to predict the other’s behaviour or elicit a certain response; they do things together. This means there is room for either party to protest the terms of the relationship, and such protests are given genuine weight by the other, rather than treated as a mere signal to adapt to. As currently programmed, chatbots do not do this.

How worried should we be? This depends on the type of task the chatbot is meant to serve. Consider a therapy chatbot, for example. There are certain therapeutic benefits that come simply from disclosing one’s feelings or deploying certain simple self-regulation strategies. Insofar as the therapy chatbot allows more people to access and adopt these strategies, we should rejoice. At the same time, there are therapeutic modalities where the heart of therapeutic change depends on forming a personal therapeutic alliance with the therapist. The therapist provides safe ground on which the client can explore her situation with another who can push back when appropriate. We thus ought to be cautious here: more work is required to determine whether these therapeutic benefits can still be imparted by chatbot therapists.

Once we have set aside the issue of AI sentience, we need to see that some of these problems arise not merely because we are dealing with chatbots rather than humans. After all, our transactional society already de-humanises human agents in many domains. Rather, the issue is that these chatbots are developed under capitalist, commercial pressures that encourage the programs to take a certain objectified form. Humans may exercise their agency to protest against the system they are embedded in, but programs that do not meet their design specifications are simply discarded. The widespread adoption of such chatbots to fulfil human emotional needs would then mean the widespread propagation of such objectifying tendencies.

Perhaps there is no principled reason why we could not develop chatbots that really behaved like human therapists and romantic partners, with demands and protests that we cannot ignore. But if the whole reason these technologies are developed is to help us escape the messy exactingness of human life, what reason could there possibly be to do that?

As the ancient Israelites made their way through this world, one temptation loomed often. When the demands of God became too difficult, the journey too perilous, the promise too remote, it was tempting to craft a god that was less demanding, more tractable, more attractive. Wouldn’t it be nice to get all the benefits of worshipping a god, but one customised to our liking? Yet the Psalmist warns them:

The idols of the nations are silver and gold,
the work of human hands.
They have mouths, but do not speak;
they have eyes, but do not see;
they have ears, but do not hear,
nor is there any breath in their mouths.
Those who make them become like them,
so do all who trust in them.

(Psalm 135:15-18)

If the psalmist is to be believed, to deform one’s object of worship is to dehumanise oneself. Today, however, we face a different, radical possibility: to replace not God but humanity with a more tractable version of itself.

We ought to be no less clear-sighted about its perils.

Brandon Yip is a post-doctoral fellow at the Australian Catholic University’s Dianoia Institute of Philosophy and a collaborator with the Hyperdigital Designs project.

Robot Souls

Which Barbie do you identify with? Stereotypical Barbie, Supreme Court Justice Barbie, Nobel Prize Winning Physicist Barbie? Or perhaps you’re Weird Barbie, Irrepressible Thoughts of Death Barbie, or the Depression Barbie who binge-watches Colin Firth in the BBC’s Pride and Prejudice? Or maybe you’re Just Ken (anywhere else I’d be a ten), and you lost interest in the patriarchy when you found out it wasn’t about horses?

Either way, the movie sensation of the summer (sorry Oppenheimer) is as pink and fluffy as candyfloss, but a great deal more thought-provoking. Of course – spoiler alert – like Pinocchio, Barbie ends up wanting to be a Real Boy. The Blue Fairy in this case is the blue-suited Barbie-creator Ruth Handler, complete with a double mastectomy and tax evasion issues. At the crucial moment, the script has Barbie say: “I want to do the imagining, not be the idea.” And off she goes, to visit… the gynaecologist.

This is not news. Every story about humans manufacturing humanoid creatures has this kind of twist, so much so that it clearly wells up from a deep sense that wanting to be human must be the ultimate goal. It’s an example of speciesist human exceptionalism, but it is weirdly Normal Barbie for humans to feel special, particularly those who claim the imago dei. Who wouldn’t want to be the subject and not the object? Is it inevitable that any emerging intelligence will yearn for consciousness?

So the recent acceleration in the rise of AI is shaking us to the core, and the UK Government’s 2023 National Risk Register names AI for the first time as a ‘chronic risk.’ Have we lost control of AI already? Is the reign of our species really drawing to an end, or can we seize the initiative before it’s too late? I think we can, and I think the answer is not so much about more regulation as about better design.

Lately, the trend in innovation has been biomimicry: learning from the tried and tested designs of nature. We invented Velcro by looking at burrs and teasels, and the bumps on the flippers of humpback whales have been used to design out drag in wind turbines. But when it comes to AI, which is also about copying nature (specifically, human intelligence), we have not been looking closely enough at what we are trying to copy.

We have completely ignored God’s blueprints. Instead, in our haste to program only the very best of our design into AI, we have left out all the ‘junk code’: the bits we’re ashamed of or struggle to understand, like our emotions, uncertainty and intuition. In fact, I have identified seven items of ‘junk code’ in which lies the essential magic of our human design and the hallmarks of the human soul.

Our Junk Code  

1. Free-will
2. Emotions
3. Sixth Sense
4. Uncertainty
5. Mistakes
6. Meaning
7. Storytelling  

If you think about it, Free Will is a disastrous design choice. Letting creatures do what they want is highly likely to lead to their rapid extinction. So let’s design in some ameliorators. The first is emotion. Humans are a very vulnerable species: their young take nine months to gestate and are largely helpless for their first few years. Emotion is a good design choice because it makes these creatures bond with their children and within their communities to protect the vulnerable.

Next, you design in a Sixth Sense, so that when there is no clear data to inform a decision, they can use their intuition to seek wisdom from the collective unconscious, which helps de-risk decision-making. Then we need to consolidate this by designing in uncertainty.

A capacity to cope with ambiguity will stop them rushing into precipitate decisions and make them seek out others for wise counsel. And if they do make mistakes? Well, they will learn from them. And mistakes that make them feel bad will develop in them a healthy conscience, which will steer them away from repeating harms in future.

Now that we have corrected their design to promote survival, what motivators are needed for their future flourishing? They need to want to get out of bed on a dark day, so we fit them with a capacity for meaning-making, because a species that can discern or create meaning in the world will find reasons to keep living in the face of any adversity (I am even making meaning out of Barbie!). And to keep the species going over generations?

We design in a superpower: storytelling. Stories allow communities to transmit their core values and purpose down the generations in a highly sticky way. Religions and other wisdom traditions have been particularly expert at this. Their stories last for centuries, future-proofing the species through the learned wisdom of our ancestors, and so the human species prevails.

We had not thought to design humanity into AI because it seemed too messy. A robot that was emotional and made mistakes would soon be sent back to the shop. After all, in the movie, that’s why they tried to box Barbie. But if we pause to reflect, we notice that our junk code is actually part of a rather clever defensive design. If this code is how we’ve solved the ‘control’ and ‘alignment’ problems inherent in our own species, might we not find wisdom in it for solving those problems for AI?

To the theologically informed, the recent spate of open letters from the authors of AI seems full of repentance. They think that naming the idol and suggesting at least its imprisonment by regulation, if not its complete destruction, will wipe the slate clean and get them all off the hook. But we embarked on this extraordinarily arrogant project with no exit strategy. And while the tools we’ve invented before eased both our lives and our labour, this is arguably the first time we’ve sought to invent a tool to replace ourselves.

Because most AI is in private hands, the truth is we have no idea how far it has already advanced. Its makers only released ChatGPT so that we would train it for them, and that has already set the cat among the pigeons. In other AIs, autonomy is about making decisions, not just completing patterns.

Once you programme an AI to re-programme itself, you cede control, and its future choices will only be as good as what you have already programmed into it in terms of basic rules and values. And I am not confident that we spent enough time getting that right before we careered off on this hubristic voyage of discovery.

So what, if anything, do we owe our creation? We should certainly not hold back from it what we know about the very programming that has led to our own flourishing. Our junk code certainly seems to have given us the capacity to thrive, even if we are still a wayward creation.

So given our understanding about doing the imagining rather than being the idea, we also need to protect it from us, and protect ourselves from the botched job we’re currently making of it, through a thoughtful debate not only on design and programming, but also on robot rights. Not because they are human, but because we are.


The William Temple Foundation’s Ethical Futures Network is at Greenbelt this year to take this conversation further. The panel will look at what AI means for the future of culture, society and our experience of God. It is called Do believe the hype? Culture, society, politics and God in an AI world, and takes place on Sunday 27 August at 1pm.

Chaired by Professor Chris Baker, for an AI panel it is unusually more Barbie than Ken, with Dr Beth Singler, the digital anthropologist and AI expert from the University of Zurich; Dr Jennifer George, a computer scientist and Head of Computing at Goldsmiths in the University of London; and Dr Eve Poole OBE, author of Robot Souls: Programming in Humanity.
