
Robot Souls


Which Barbie do you identify with? Stereotypical Barbie, Supreme Court Justice Barbie, Nobel Prize Winning Physicist Barbie? Or perhaps you’re Weird Barbie, Irrepressible Thoughts of Death Barbie, or the Depression Barbie who binge-watches Colin Firth in the BBC’s Pride and Prejudice? Or maybe you’re Just Ken (anywhere else I’d be a ten), and you lost interest in the patriarchy when you found out it wasn’t about horses?

Either way, the movie sensation of the summer (sorry Oppenheimer) is as pink and fluffy as candyfloss, but a great deal more thought-provoking. Of course – spoiler alert – like Pinocchio, Barbie ends up wanting to be a Real Boy. The Blue Fairy in this case is the blue-suited Barbie-creator Ruth Handler, complete with a double mastectomy and tax evasion issues. At the crucial moment, the script has Barbie say: “I want to do the imagining, not be the idea.” And off she goes, to visit… the gynaecologist.

This is not news. Every story about humans manufacturing humanoid creatures has this kind of twist, so much so that it clearly wells up from a deep sense that wanting to be human must be the ultimate goal. It is speciesist human exceptionalism, but it is weirdly Normal Barbie for humans to feel special, particularly those who claim the imago dei. Who wouldn’t want to be the subject and not the object? Is it inevitable that any emerging intelligence will yearn for consciousness?

So the recent acceleration in the rise of AI is shaking us to the core, and the UK Government’s 2023 National Risk Register names AI for the first time as a ‘chronic risk.’ Have we lost control of AI already? Is the reign of our species really drawing to an end, or can we seize the initiative before it’s too late? I think we can, and I think the answer is not so much about more regulation as about better design.

Lately, innovation has been inspired by biomimicry: learning from the tried and tested designs of nature. We invented Velcro by looking at burrs and teasels, and the bumps on the fins of humpback whales have been used to design out drag in wind turbines. But when it comes to AI, which is also about copying nature – specifically human intelligence – we have not been looking closely enough at what we are trying to copy.

We have completely ignored God’s blueprints. Instead, in our haste to program only the very best of our design into AI, we have left out all the ‘junk code’: the bits we’re ashamed of or struggle to understand, like our emotions, uncertainty, and intuition. In fact, I have identified seven items of ‘junk code’ in which lie the essential magic of our human design and the hallmarks of the human soul.

Our Junk Code  

1. Free Will
2. Emotions
3. Sixth Sense
4. Uncertainty
5. Mistakes
6. Meaning
7. Storytelling  
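If you will forgive a literal-minded sketch, imagine the junk code written out as an actual design spec. Everything below – the names, the toy survival rule – is invented purely for illustration, not taken from anywhere else:

```python
# A tongue-in-cheek sketch: the seven items of human 'junk code'
# treated as if they really were a design specification.
from dataclasses import dataclass


@dataclass
class JunkCode:
    """The 'flaws' deliberately left in the human design."""
    free_will: bool = True         # disastrous on its own; needs ameliorators
    emotions: bool = True          # bonds creatures to their vulnerable young
    sixth_sense: bool = True       # intuition when there is no clear data
    uncertainty: bool = True       # slows precipitous decision-making
    mistakes_allowed: bool = True  # errors train a healthy conscience
    meaning_making: bool = True    # a reason to get out of bed on a dark day
    storytelling: bool = True      # values transmitted down the generations

    def survives(self) -> bool:
        # Toy rule: free will without its ameliorators is a design failure.
        if self.free_will and not (self.emotions and self.uncertainty):
            return False
        return True


human = JunkCode()
reckless_ai = JunkCode(emotions=False, uncertainty=False)
print(human.survives(), reckless_ai.survives())  # prints: True False
```

The point of the toy rule is the argument of the next paragraphs: free will only works as a design choice because the other six items are shipped alongside it.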

If you think about it, Free Will is a disastrous design choice. Letting creatures do whatever they want is highly likely to lead to their rapid extinction. So let’s design in some ameliorators. The first is emotion. Humans are a very vulnerable species, because their young take nine months to gestate and are largely helpless for their first few years. Emotion is a good design choice because it makes these creatures bond with their children and within their communities to protect the vulnerable.

Next, you design in a Sixth Sense, so that when there is no clear data to inform a decision, they can use their intuition to seek wisdom from the collective unconscious, which helps de-risk decision-making. Then we need to consolidate this by designing in uncertainty.

A capacity to cope with ambiguity will stop them rushing into precipitous decision-making, and make them seek others out for wise counsel. And if they do make mistakes? Well, they will learn from them. And mistakes that make them feel bad will develop in them a healthy conscience, which will steer them away from repeated harms in future.

Now that we have corrected their design to promote survival, what motivators are needed for their future flourishing? They need to want to get out of bed on a dark day, so we fit them with a capacity for meaning-making, because a species that can discern or create meaning in the world will find reasons to keep living in the face of any adversity (I am even making meaning out of Barbie!). And to keep the species going over generations?

We design in a super-power: storytelling. Stories allow communities to transmit their core values and purpose down the generations in a highly sticky way. Religions and other wisdom traditions have been particularly expert at this. Their stories last for centuries, future-proofing the species through the learned wisdom of our ancestors, and so the human species prevails.

We had not thought to design humanity into AI because it seemed too messy. A robot that was emotional and made mistakes would soon be sent back to the shop. After all, in the movie, that’s why they tried to box Barbie. But if we pause to reflect, we notice that our junk code is actually part of a rather clever defensive design. If this code is how we’ve solved the ‘control’ and ‘alignment’ problems inherent in our own species, might we not find wisdom in it for solving those problems for AI?

To the theologically informed, the recent spate of open letters from the authors of AI is full of repentance. They think that naming the idol and suggesting at least its imprisonment by regulation, if not its complete destruction, will wipe the slate clean and get them all off the hook. But we embarked on this extraordinarily arrogant project with no exit strategy. And while the tools we’ve invented before eased both our lives and our labour, this is arguably the first time we’ve sought to invent a tool to replace ourselves.

Because most AI is in private hands, the truth is we have no idea how far it has already advanced. ChatGPT was only released so that we would train it for its makers, and that has already set the cat among the pigeons. In other AIs, autonomy already extends to making decisions, not just completing patterns.

Once you programme an AI to re-programme itself, you cede control, and its future choices will only be as good as what you have already programmed into it in terms of basic rules and values. And I am not confident that we spent enough time getting that right before we careered off on this hubristic voyage of discovery.

So what, if anything, do we owe our creation? We should certainly not hold back from it what we know about the very programming that has led to our own flourishing. Our junk code certainly seems to have given us the capacity to thrive, even if we are still a wayward creation.

So given our understanding about doing the imagining rather than being the idea, we also need to protect it from us, and protect ourselves from the botched job we’re currently making of it, through a thoughtful debate not only on design and programming, but also on robot rights. Not because they are human, but because we are.


The William Temple Foundation’s Ethical Futures Network is at Greenbelt this year to take this conversation further. The panel will look at what AI means for the future of culture, society and our experience of God. It is called Do believe the hype? Culture, society, politics and God in an AI world, and takes place on Sunday 27 August at 1pm.

Chaired by Professor Chris Baker, for an AI panel it is unusually more Barbie than Ken, with Dr Beth Singler, the digital anthropologist and AI expert from the University of Zurich; Dr Jennifer George, a computer scientist and Head of Computing at Goldsmiths in the University of London; and Dr Eve Poole OBE, author of Robot Souls: Programming in Humanity.


Hyperdigital Designs: A Report on Cybernetic Grammar at its Highest Point


With the heaven-sent speed of Hermes, computers calculate in writing to shape the grammar of the world.  Although analysable into binary algebra, the calculations of computers are more than mathematical, and more than mechanical. For if computers can be said to write the script of their mechanical operations, and to do so with a grammar that is uniquely their own, then the grammar of cybernetic engines must exceed beyond, and enter in so as to shape the motion of any machine. And if, in shaping this motion, computers continuously gesture beyond the immanent frame of their mechanical operations, then we should investigate the cybernetic grammar of the digital from its highest points.

It is this higher way of writing of the grammar of computers that we have begun to investigate.  On Wednesday 14 June, we convened the Hyperdigital Designs workshop at the University of Cambridge for the purpose of exploring the hyperbolic cybernetic grammar of computers.  This workshop was hosted by Cambridge Digital Humanities, and co-sponsored by the William Temple Foundation and the Diverse Intelligence Summer Institute (DISI).

Hyperdigital Designs Workshop
https://www.cdh.cam.ac.uk/events/36499/

During the workshop, the ‘How to Play with Fire’ team of DISI 2022 hosted sixteen invited guest speakers to contribute papers reflecting on the significance of what we have begun to call the ‘hyperdigital’ for theology, philosophy, ethics, politics, and the arts.

The hyperdigital designates a higher or hyperbolic reflection on the creative origins and free use of the cybernetic grammar of computers.  It can be called ‘hyper-digital’ in the sense of a ‘hyperbole’ (ὑπερβολή) or excess of signification, in which cybernetic judgments both exceed beyond and enter in to animate the free creation and use of digital techniques. 

The hyperdigital can be doubly contrasted with the ‘digital’, which scripts the algebraic calculation of mechanical operations, and the ‘postdigital’, which reflects upon an indefinite bricolage of conceptually evacuated relations of material entanglement. 

Beyond both the ‘digital’ and the ‘postdigital’, the ‘hyperdigital’ is a hyperbolic cybernetic grammar, which, in the sense of a hyperbole, exceeds so as more radically to enter and accelerate the free use of digital computation and communication – whether among the creators of digital systems, or from the oldest creator of the idea of the digital itself.

The ‘hyperdigital’ was conceived at the 2022 Diverse Intelligence Summer Institute (DISI 2022) at the University of St. Andrews by the ‘How to Play with Fire’ team, consisting of Ryan Haecker, Jenny Liu Zhang, and Brandon Yip, with the later addition of Olivia Thomas.

During the course of DISI 2022, we argued that the postdigital had failed to accommodate the higher reflections upon the creative source of the idea and calculation of the digital.  Instead, it had recirculated the grammatical rupture and ontological violence of the digital in an apocalyptic rhetoric of the crash and release of the coherence of digital systems.

Since the conclusion of DISI 2022, the How to Play with Fire team has continued to meet for monthly discussions of recent developments in the philosophy of technology, especially as it relates to information, cybernetics, and the cybernetic grammar of computers.

At the conclusion of our year-long collaborative project, the How to Play with Fire team convened the Hyperdigital Designs workshop at the University of Cambridge, with financial and administrative assistance generously provided by Cambridge Digital Humanities.

We enjoyed a wonderfully thoughtful day examining the hyperdigital, as well as imagining solutions to promote human flourishing.

The videos, presentations, and photos from this workshop can be found in the links below:

Video Playlist
https://www.youtube.com/playlist?list=PL-d6U1dRcJENeqpiM9xDd6_FzYlF1-3PG

Presentation slides
https://drive.google.com/drive/folders/1zGOzz601K--xX89Wk_7BFM0UGa2ve-Da?usp=sharing

Photos
https://drive.google.com/drive/folders/1dLV4TWigiu4j2x3UMtMrHLTTztGoDAz1?usp=sharing

Following the Hyperdigital Designs workshop, Ryan Haecker, a research fellow of the William Temple Foundation, has published a new peer-reviewed article in Postdigital Science and Education, titled ‘Via Digitalis: From the Postdigital to the Hyperdigital’. He argues three theses: the postdigital has failed; postdigital theology is incompatible with Christian theology; and, for mystical theology, the hyperdigital is the truth of the postdigital.

Following the publication of this article, he presented a summary of the year-long collaboration of the ‘How to Play with Fire’ team at DISI 2023 at the University of St. Andrews. A video recording of his talk can be found at the link below:

Video
https://youtu.be/xhpV0lV-hTg

In the future, we hope to publish the proceedings of the Hyperdigital Designs workshop in an edited volume. We invite expressions of interest in collaborating on this future project.

Email
hyperdigitaldesigns@gmail.com

Twitter
https://twitter.com/HyperdigitalDes

YouTube
https://www.youtube.com/channel/UCVN_wwMfYJDp2cVsBElAcVg

Finally, to continue engaging with the lively threads inspired at this workshop, please join us in our new community Discord server, Hyperdigital Designs. We will use this as an online hub to discuss ideas, share publications and projects, and stay connected about all things related to human flourishing while navigating the hyperdigital.

Discord Server
https://discord.gg/ZqwkUYNVT4

The Hyperdigital Designs Team

Ryan Haecker (University of Austin)

Jenny Liu Zhang (University of Edinburgh)

Brandon Yip (Australian National University)

Olivia Thomas (University of Edinburgh)
