Which Barbie do you identify with? Stereotypical Barbie, Supreme Court Justice Barbie, Nobel Prize Winning Physicist Barbie? Or perhaps you’re Weird Barbie, Irrepressible Thoughts of Death Barbie, or the Depression Barbie who binge-watches Colin Firth in the BBC’s Pride and Prejudice? Or maybe you’re Just Ken (anywhere else I’d be a ten), and you lost interest in the patriarchy when you found out it wasn’t about horses?
Either way, the movie sensation of the summer (sorry Oppenheimer) is as pink and fluffy as candyfloss, but a great deal more thought-provoking. Of course – spoiler alert – like Pinocchio, Barbie ends up wanting to be a Real Boy. The Blue Fairy in this case is the blue-suited Barbie-creator Ruth Handler, complete with a double mastectomy and tax evasion issues. At the crucial moment, the script has Barbie say: “I want to do the imagining, not be the idea.” And off she goes, to visit… the gynaecologist.
This is not news. Every story about humans manufacturing humanoid creatures has this kind of twist, so much so that it clearly wells up from a deep sense that becoming human is the ultimate goal. It’s an example of speciesist human exceptionalism, but it is weirdly Normal Barbie for humans to feel special, particularly those who claim the imago dei. Who wouldn’t want to be the subject and not the object? Is it inevitable that any emerging intelligence will yearn for consciousness?
So the recent acceleration in the rise of AI is shaking us to the core, and the UK Government’s 2023 National Risk Register names AI for the first time as a ‘chronic risk.’ Have we lost control of AI already? Is the reign of our species really drawing to an end, or can we seize the initiative before it’s too late? I think we can, and I think the answer is not so much about more regulation as about better design.
Lately, the trend in innovation has been to draw on biomimicry, which means learning from the tried and tested designs of nature. We invented Velcro by looking at burrs and teasels; and the bumps on the fins of humpback whales have been used to design out drag in wind turbines. But when it comes to AI, which is also about copying nature – specifically human intelligence – we have not been looking closely enough at what we are trying to copy.
We have completely ignored God’s blueprints. Instead, in our haste to program only the very best of our design into AI, we have left out all the ‘junk code’ – the bits we’re ashamed of, or struggle to understand, like our emotions, uncertainty, and intuition. In fact, I have identified 7 items of ‘junk code’ in which lie the essential magic of our human design and the hallmarks of the human soul.
Our Junk Code

1. Free will
2. Emotions
3. Sixth Sense
4. Uncertainty
5. Mistakes
6. Meaning
7. Storytelling
If you think about it, Free Will is a disastrous design choice. Letting creatures do what they want is highly likely to lead to their rapid extinction. So let’s design in some ameliorators. The first is emotion. Humans are a very vulnerable species because their young take 9 months to gestate, and are largely helpless for their first few years. Emotion is a good design choice because it makes these creatures bond with their children and with their communities to protect the vulnerable.
Next, you design in a Sixth Sense, so that when there is no clear data to inform a decision, they can use their intuition to seek wisdom from the collective unconscious, which helps de-risk decision-making. Then we need to consolidate this by designing in uncertainty.
A capacity to cope with ambiguity will stop them rushing into hasty decisions, and make them seek others out for wise counsel. And if they do make mistakes? Well, they will learn from them. And mistakes that make them feel bad will develop in them a healthy conscience, which will steer them away from repeating harms in future.
Now that we have corrected their design to promote survival, what motivators are needed for their future flourishing? They need to want to get out of bed on a dark day, so we fit them with a capacity for meaning-making, because a species that can discern or create meaning in the world will find reasons to keep living in the face of any adversity (I am even making meaning out of Barbie!). And to keep the species going over generations?
We design in a super-power of storytelling. Stories allow communities to transmit their core values and purpose down the generations in a highly sticky way. The religions and other wisdom traditions have been particularly expert at this. Their stories last for centuries, future-proofing the species through the learned wisdom of our ancestors, and the human species prevails.
We had not thought to design humanity into AI because it seemed too messy. A robot that was emotional and made mistakes would soon be sent back to the shop. After all, in the movie, that’s why they tried to box Barbie. But if we pause to reflect, we notice that our junk code is actually part of a rather clever defensive design. If this code is how we’ve solved the ‘control’ and ‘alignment’ problems inherent in our own species, might we not find wisdom in it for solving those problems for AI?
To the theologically informed, it seems that the recent spate of open letters from the authors of AI are full of repentance. They think that naming the idol and suggesting at least its imprisonment by regulation, if not its complete destruction, will wipe the slate clean and get them all off the hook. But we embarked on this extraordinarily arrogant project with no exit strategy. And while our previous inventions eased both our lives and our labour, this is arguably the first time we have sought to invent a tool to replace ourselves.
Because most AI is in private hands, the truth is we have no idea how far it has already advanced. They only released ChatGPT so we would train it for them, and that has already set the cat among the pigeons. In other AIs, autonomy already extends to making decisions, not just completing patterns.
Once you programme an AI to re-programme itself, you cede control, and its future choices will only be as good as what you have already programmed into it in terms of basic rules and values. And I am not confident that we spent enough time getting that right before we careered off on this hubristic voyage of discovery.
So what, if anything, do we owe our creation? We should certainly not hold back from it what we know about the very programming that has led to our own flourishing. Our junk code certainly seems to have given us the capacity to thrive, even if we are still a wayward creation.
So given our understanding about doing the imagining rather than being the idea, we also need to protect it from us, and protect ourselves from the botched job we’re currently making of it, through a thoughtful debate not only on design and programming, but also on robot rights. Not because they are human, but because we are.
The William Temple Foundation’s Ethical Futures Network is at Greenbelt this year to take this conversation further. The panel will look at what AI means for the future of culture, society and our experience of God. It is called ‘Do believe the hype? Culture, society, politics and God in an AI world’ and takes place on Sunday 27 August at 1pm.
Chaired by Professor Chris Baker, for an AI panel it is unusually more Barbie than Ken, with Dr Beth Singler, the digital anthropologist and AI expert from the University of Zurich; Dr Jennifer George, a computer scientist and Head of Computing at Goldsmiths in the University of London; and Dr Eve Poole OBE, author of Robot Souls: Programming in Humanity.