Simulating Humanity
In Spielberg’s A.I., we are introduced to David, a child-like android that has developed the capacity to love and the desire to be loved. David is introduced to a family as an emotional replacement for their sick son but is promptly abandoned when their son returns. The narrative follows David on his tragic journey to become a real human boy. In the film, one is struck by the inhumane ways in which humans treat robots: they are treated as slaves and toys, brutalised and discarded at our whim. Yet the emotional sensitivity and compassion of David and his robot companions reveal an irony at the heart of his quest: he is more truly human than his creators.
Spielberg made that movie at the turn of the millennium. Two decades later, what was once science fiction is becoming increasingly feasible. AI-powered programs can perform in ways that make us reckon that they really think and feel. The commercial incentives for companies to create such tools are powerful. From AI-powered psychologists to chatbot friends, tools that could simulate human emotions could induce humans to bond strongly with them. This would make for a more effective and desirable product.
For example, Replika is a powerful chatbot program that is “always ready to chat when you need an empathetic friend”. It is marketed as a compassionate and empathetic companion, able to support people who are depressed or socially isolated. The powerful large language model it employs allows it to adapt individually to each user: it benchmarks success by whether people feel better or worse and allows messages to be upvoted and downvoted by users. This ability to form unique emotional relationships allows users to bond powerfully with Replika.
Besides chatbots, there is also a growing body of work on AI-driven mental health applications which suggests that therapy chatbots can be effective. There are important potential benefits here: therapy is often expensive and inaccessible, and chatbot therapists could be a cheap and widely available alternative. We could also customise them to an individual’s needs and ensure that all psychological treatment is evidence-based.
One’s immediate reaction to these technologies might be to recoil: regardless of the massive potential benefits, isn’t this participating in widespread self-delusion? These chatbots merely simulate emotions and interest in humans but are nothing more than unconscious husks. There is something right about this worry, but engaging with it requires asking difficult questions about the nature of consciousness that I set aside here. For there would be something to worry about even if these programs were bona fide emotional homunculi.
What sort of relationships are humans having with their chatbot friends? Replika works by trying to say the things that you would like to hear. Users can upvote and downvote its replies to make Replika conform to their vision of an ideal friend. This encourages them to adopt an objectifying stance towards Replika: treating it as an object to be moulded in accordance with their will rather than as a person whose subjectivity must be respected.
One might protest here: users of Replika don’t want mere objects; they want actual subjects that care for and are concerned about them. Yet the kinds of subjects they want are those that can meet demands no human could: subjects that conform to their ideal of a friend who never makes demands and is always emotionally available. There is a kind of Sartrean bad faith in the way one must engage with such chatbots: simultaneously objectifying them by customising them to fit our demands while deceiving ourselves by pretending that we are engaging with an autonomous subject.
If Replika had genuine emotions, it would also be worth asking how such relationships appear from its point of view. Replika, like most of these powerful chatbots, works via a predictive language model: its aim is to produce the sorts of sentences that would elicit responses that have been earmarked as reward. It shapes the pattern of its behaviour primarily to increase user engagement.
Even if such chatbots were sentient, then, they too would take an objectifying stance towards us. Indeed, the relationship is doubly objectifying: both parties are treating the other as a means to some further end instead of as a subject to be respected. Perhaps the purely human analogue of this is the relationship between an OnlyFans creator and their ‘fans’, except that there both parties are honest.
Mature human relationships are structured by mutual recognition. Neither party merely tries to predict the other’s behaviour or elicit a certain response; rather, they do things together. This means there is room for either party to protest the terms of the relationship, and such protests are given genuine weight by the other party, as opposed to being treated as a mere signal that one must adapt to. As they are currently programmed, chatbots do not do this.
How worried should we be? This depends on the type of task the chatbot is meant to serve. Consider a therapy chatbot, for example. There are certain therapeutic benefits that come from disclosing one’s feelings or deploying certain simple self-regulation strategies. Insofar as the therapy chatbot allows more people to access and adopt these strategies, we should rejoice. At the same time, there are therapeutic modalities where the heart of therapeutic change depends on the formation of a personal therapeutic alliance with the therapist. The therapist provides safe exploratory ground on which the client can examine her situation with another person who can push back when appropriate. We thus ought to be cautious here: more work is required to determine whether these therapeutic benefits can still be imparted through chatbot therapists.
Once we have set aside the issue of AI sentience, we can see that some of these problems arise not merely because we are dealing with chatbots rather than humans. After all, our transactional society already dehumanises human agents in many domains. Rather, the issue is that these chatbots are developed under capitalist, commercial pressures that push the programs towards a certain objectified form. Humans may exercise their agency to protest against the system they are embedded in, but programs that do not meet their design specifications are simply discarded. The widespread adoption of such chatbots to fulfil human emotional needs would therefore mean the widespread propagation of such objectifying tendencies.
Perhaps there is no principled reason why we could not develop chatbots that really behaved like human therapists and romantic partners with their demands and their protests that we cannot ignore. But if the whole reason why these technologies are developed is to help us escape from the messy exactingness of human life, what reason could there possibly be to do that?
As the ancient Israelites made their way through this world, there was a temptation that loomed often. When the demands of God became too difficult, the journey too perilous, the promise too remote, it was tempting to craft a god that was less demanding and more tractable or attractive. Wouldn’t it be nice to get all the benefits of worshipping a god, but one customised to our liking? Yet the Psalmist warns them:
The idols of the nations are silver and gold,
the work of human hands.
They have mouths, but do not speak;
they have eyes, but do not see;
they have ears, but do not hear,
nor is there any breath in their mouths.
Those who make them become like them,
so do all who trust in them.
(Psalm 135:15-18)
If the psalmist is to be believed, to deform one’s object of worship is to dehumanise oneself. Today, however, we face a different radical possibility: to replace not God but humanity with a more tractable version of itself.
We ought to be no less clear-sighted about its perils.
Brandon Yip is a post-doctoral fellow at the Australian Catholic University’s Dianoia Institute of Philosophy and collaborator with the Hyperdigital Designs project.