A sermon from a mountebank? Religious messaging in the age of AI

Photo illustration: Princeton University Press. Images: Ben Michel and Possessed Photography via Unsplash.

By Paul Seabright

The news that the religious group Catholic Answers was obliged to “defrock” an AI priest called Father Justin, after it falsely claimed in its answers to be a real priest,[1] has caused widespread alarm among the faithful and glee among the skeptical. Both reactions are wildly inappropriate. The alarm reflects a fear that the trusting respect of members of religious movements will become newly vulnerable to charlatans, and even just to the “rogue” theological hallucinations of Large Language Models (LLMs). The glee seems to channel a presumption among skeptics that members of religious movements are uniquely epistemologically naïve, and therefore that AI chatbots will simply reap what generations of religious leaders have sown. Yet neither trusting respect nor epistemological naivety adequately captures the relationship between religious leaders and their members, which has been honed by an intense to-and-fro of energetic call and wary response across the centuries.

Contrary to this condescending picture of the naivety of the faithful, LLMs do not constitute a new wave of impostors in a previously calm sea of mostly authentic communication. For as long as we have records, human communication has involved claims and counterclaims attributed to sources whose authenticity has been the subject of continual argument and negotiation. Parents, teachers, prophets, elders, storytellers, wise folk, poets, political leaders, departed ancestors, saints, gods, demons, oracles—dispensing advice, issuing orders, recounting fables or cautionary tales at first-, second-, or third-hand—it’s rare for any of us to receive a simple message from our physical or social world. Nearly everything we hear or read is filtered through multiple levels of appeal to authority, and the exact human status of those different levels is often undecided, even undecidable. We are surrounded by message bearers who seem to be cleverer and better informed than we are. There’s no foolproof method to be sure whether their messages make sense for us; we must decide this for ourselves. Religious movements have been putting such challenges to their members for thousands of years. Amid this clamor of voices, whom can you really trust?

We don’t necessarily need to decide whether a text has been written by a computer before responding to the ideas it contains. That’s just as well, because researchers in artificial intelligence have been noting challenges to our sense of the boundaries of the human since at least the mid-1960s, when a computer program called ELIZA, written by Joseph Weizenbaum at MIT, was able to generate responses to (typed) conversational inputs that were troublingly similar to those of a psychotherapist. For instance, in reply to the statement “I am unhappy,” the program might reply, “Why do you think that coming here will help you not to be unhappy?” and in reply to “My father is afraid of everybody” it might reply, “What else comes to mind when you think of your father?” Weizenbaum reported that “some subjects have been very hard to convince that ELIZA is not human.”
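For readers curious about the mechanism, here is a minimal sketch, in Python rather than ELIZA’s original implementation language, of the kind of keyword-and-template pattern matching that produced such responses. The rules and wordings below are hypothetical stand-ins modeled on the two examples above, not Weizenbaum’s actual script.

```python
import re

# A minimal ELIZA-style responder: match a keyword pattern at the start
# of the user's statement, then reflect the captured fragment back
# inside an open-ended question. Rules and templates are illustrative.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    # "I am X" -> a question about X
    (r"i am (.*)", "Why do you think that coming here will help you not to be {0}?"),
    # "My <noun> ..." -> a question about the noun
    (r"my (\w+)", "What else comes to mind when you think of your {0}?"),
]

def reflect(fragment):
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    text = statement.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # content-free fallback when no rule matches

print(respond("I am unhappy"))
# Why do you think that coming here will help you not to be unhappy?
print(respond("My father is afraid of everybody"))
# What else comes to mind when you think of your father?
```

The striking thing is how little machinery is required: a handful of rules that reflect the speaker’s own words back as questions was enough to leave some of Weizenbaum’s subjects convinced they were conversing with a human.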

In 1970 a professor of robotics at the Tokyo Institute of Technology called Masahiro Mori proposed the notion of the uncanny valley to describe the way in which a person’s response to robots would initially become more empathetic as their resemblance to humans increased, but would abruptly shift from empathy to a kind of eerie revulsion as the robots became almost, but not quite, perfectly lifelike. He noted that wooden statues of the Buddha were usually carefully crafted to avoid appearing too lifelike, and as a result avoided eliciting revulsion among the faithful. However, statues of the Buddha can often be unsettling even without inhabiting the uncanny valley, as the poet and critic William Empson noted, attributing this (controversially) to a systematic asymmetry in their faces that leaves the viewer unsure which emotions, if any, to attribute to the statue or to the being it is supposed to represent. Spiritual beings can appear disturbing as well as comforting, and sometimes the comfort they bring seems to be a direct response to the unsettling emotions provoked by our sense of their presence.

Wondering whether a message is genuine is not just a matter of wondering about the reality of the sender. I may ask myself whether the compliment I have just received was written by a human being who really believes it, or by a chatbot programmed to raise my morale. But I may equally wonder whether a friend, a colleague, or a therapist is telling me the truth when they praise my work or my character or my resilience in dealing with some challenge. I may doubt the message of a politician who claims to have understood the suffering of the voters in my community. These questions aren’t new, and members of religious movements have been dealing with them for as long as religion has been around. Every person has their own way of coping—many Christians would feel that it doesn’t really matter whether the Good Samaritan in the parable told by Jesus was a real person, but that it matters a lot whether the Jesus who told the parable (according to the Bible) was a real person. Either way, we can learn lessons from the parable without resolving either question.

Because artificial intelligence intensifies the impact of earlier advances in printing, computing, and telecommunications on human communication, it may portend the kind of turbulence for religious movements that followed the invention of printing. The cost of sending plausible religious messages has fallen precipitously, so we can count on receiving many more of them. Consider, for example, GitaGPT, an AI chatbot that calls itself “Your AI spiritual companion” and channels the wisdom of the Bhagavad Gita.[2] Still, the evidence I survey in my book suggests that organized religion will ride out this challenge as it has ridden out so many that came before. When I asked an LLM in all seriousness how LLMs might help religious believers tell the difference between true and false religious messages, it offered various pieces of advice, noting that it could help “research the source . . . evaluate the message content,” and so forth. But its main advice was to “seek input from others. Talk to other members of your religious community and seek their opinions on the message.” It concluded: “Remember, evaluating the truthfulness of religious messages can be complex and requires careful consideration. If you are still uncertain about a message, seek guidance from a trusted religious leader or scholar.” You could not wish for a clearer affirmation of the theme of The Divine Economy: religious movements are about creating trusted communities. Artificial intelligence is not going to change that business model—what it will do is increase the intensity with which rival movements compete. It will also increase uncertainty about which messages most accurately represent which communities—and thereby raise the premium on finding communities whose communication can be trusted.

The intensity of religious competition has been increasing across the world for several decades, as migration, economic development, the spread of education, and above all the falling costs of information processing and transmission have made it easier for people to choose between alternative religious offerings, and easier for religious movements to reach out to populations beyond those they have traditionally served. This has also made it easier for some people to manage their lives without any place for organized religion at all. But religious movements have adapted and evolved—notably, by developing more explicitly the platform model of religious organization articulated in The Divine Economy. By focusing explicitly on the creation of communities, and listening to what those communities want, the most innovative religious organizations are expanding their reach. Far from being threatened by the advance of secular activity, their place in human society in the twenty-first century seems as secure as ever.


Paul Seabright teaches economics at the Toulouse School of Economics, and until 2021 was director of the multidisciplinary Institute for Advanced Study in Toulouse. From 2021 to 2023, he was a Fellow of All Souls College at the University of Oxford. His books include The War of the Sexes: How Conflict and Cooperation Have Shaped Men and Women from Prehistory to the Present, and The Company of Strangers: A Natural History of Economic Life (both Princeton).

Notes

[1] https://futurism.com/catholics-defrock-ai-priest-hallucinations
[2] https://newsroom.churchofjesuschrist.org/article/church-jesus-christ-artificial-intelligence