Remarkable study examines the impact of real-life “robot preachers”

(Photo credit: OpenAI's DALL·E)

A trio of studies (one conducted in a Buddhist temple that recently installed a robot preacher, another in a Taoist temple, and a third online) revealed that religious followers perceive robot preachers, and the institutions employing them, as less credible than their human counterparts. This perception stems from the belief that robots lack minds, which undermines their credibility as preachers. The research was published in the Journal of Experimental Psychology: General.

The past decade has seen a dramatic increase in the capabilities of automated and artificial intelligence (AI) systems. The resulting transformation of global society and the economy is so far-reaching that many believe we are witnessing the start of an AI revolution that will fundamentally change the way we work and live.

Artificial intelligence systems are now used to great effect in many domains that were traditionally considered exclusively human. While automated systems have been used in manufacturing and gaming for decades, recent advances have enabled AI systems to enter areas such as medicine, journalism, psychotherapy, and even prostitution. Many studies show that people find it very hard, or even impossible, to distinguish art created by AI from art made by humans simply by looking at it.

Recently, attempts have been made to have robots take on the roles of religious professionals. Mindar, a robot designed to resemble the Buddhist deity of mercy, made headlines in 2019 after it began delivering religious services at a temple in Japan. While still rare, robots have also been installed to perform similar roles in several Christian churches of different denominations.

In their new study, Joshua Conrad Jackson of the University of Chicago and his colleagues hypothesized that although robot preachers may effectively convey religious content, they may struggle with credibility. Robots are perceived as lacking a mind, that is, as devoid of the ability to feel, understand, think, or make decisions. Consequently, robots are seen as incapable of genuinely believing in supernatural entities and as unable to demonstrate the deep commitment to faith expected of religious professionals. To explore this, the researchers conducted three studies.

In the first study, the researchers surveyed individuals leaving the Kodaiji Temple after a sermon. Kodaiji is a large Buddhist temple in the Higashiyama District of Kyoto, Japan, and the place where the famous robot preacher Mindar was installed. Jackson and his colleagues surveyed visitors after sermons given by Mindar and after sermons given by human preachers; Mindar and the human preachers delivered their sermons in different buildings. Over the 6-week observation period, the study authors surveyed 498 participants. Their average age was 46 years, and 228 were women.

The researchers offered 1,000 yen (just under $7) to prospective survey participants. Those who accepted were then given the option to donate a portion of this money to the temple, and the amount donated served as an indicator of their religious commitment. Participants also rated the preacher's credibility (e.g., "The robot [human] priest acts as a good religious role model") and completed an assessment of their religious beliefs and moral values.

The second study was an experiment conducted in a Taoist temple in Singapore, where the researchers randomly assigned sermons to be delivered either by a robot or a human preacher. A total of 239 temple visitors participated in the study.

The researchers assessed participants' religious commitment and had them rate the credibility of the preacher using the same measures as in study 1. Additionally, these individuals completed an assessment of perceptions of robots' minds, that is, their ability to have experiences (e.g., "Robots can have desires") and their agency (e.g., "Robots can think").

Study 3 was an online experiment in which participants were told that the sermon they were shown was generated either by an advanced AI program or by a human preacher. Participants were 300 Amazon MTurk workers who read an excerpt from a sermon allegedly written by an AI or a human. Afterward, they completed an assessment of religious commitment (e.g., "I would consider donating money to my church"; "I would consider telling strangers to join my place of worship").

The participants also rated the credibility of the alleged author of the sermon, the author's perceived possession of a mind (e.g., "The person [AI] who wrote this sermon is probably capable of thinking and planning"; "The person [AI] who wrote this sermon is probably capable of hunger and thirst"), the author's charisma and likability ("The person [AI] who wrote this sermon is probably likable"; "The person [AI] who wrote this sermon is probably charismatic"), the credibility of the person who trained the AI, and the extent to which they believed AI could be human-like (AI anthropomorphism).

Results of study 1 showed that participants who viewed Mindar were less likely to believe in God than participants who viewed a human preacher. They also donated only 26% of their compensation to the temple on average, compared with 44% donated by those who saw a human preacher. Participants also found Mindar to be substantially less credible than a human preacher. Further analysis showed that whether the preacher was a robot or a human was associated with whether individuals donated to the temple at all, but not with how much they donated.

Results of study 2 largely mirrored those of the first study. Participants rated the robot preacher as less credible than the human preacher, and this extended to the credibility of the temple as a whole, which was rated lower after participants attended a robot-led sermon. Participants also donated less after a robot-led sermon in this study.

Results of study 3 confirmed the previous findings: AI authors of sermons were perceived as less credible, and as having less of a mind, than human authors. Participants also reported less religious commitment after reading the AI-written sermon.

The study authors tested a statistical model proposing that information about the nature of the sermon's alleged author shaped perceptions of the author's mind, that these perceptions determined the credibility assigned to the author, and that credibility, in turn, influenced participants' religious commitment. The results were consistent with this chain of relationships.

Another model suggested that perceptions of the sermon author's mental capacity influenced the author's likability and credibility, which in turn shaped religious commitment. The data supported this model as well.
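To illustrate the structure of the first of these models, the sketch below expresses the proposed chain (author type, to mind perception, to credibility, to religious commitment) as a series of ordinary least-squares regressions on simulated data. This is a minimal illustration only: the variable names, the simulated numbers, and the statsmodels-based approach are assumptions for demonstration, not the authors' actual data or analysis code.

```python
# Minimal sketch of a serial-mediation-style chain on simulated data.
# All variable names and effect sizes are hypothetical, for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Simulated data: robot authorship lowers perceived mind, which lowers
# credibility, which in turn lowers religious commitment.
author_is_robot = rng.integers(0, 2, n)
mind_perception = 5 - 1.5 * author_is_robot + rng.normal(0, 1, n)
credibility = 2 + 0.8 * mind_perception + rng.normal(0, 1, n)
commitment = 1 + 0.6 * credibility + rng.normal(0, 1, n)

def ols(y, predictors):
    """Fit an OLS regression with an intercept and return its coefficients."""
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(y, X).fit().params

# Path a: authorship -> mind perception
a = ols(mind_perception, [author_is_robot])
# Path b: mind perception -> credibility (controlling for authorship)
b = ols(credibility, [author_is_robot, mind_perception])
# Path c: credibility -> commitment (controlling for the earlier variables)
c = ols(commitment, [author_is_robot, mind_perception, credibility])

print("a (robot -> mind):", a[1])
print("b (mind -> credibility):", b[2])
print("c (credibility -> commitment):", c[3])
```

In a serial chain like this, the product of the a, b, and c path coefficients approximates the indirect effect of authorship on religious commitment running through mind perception and credibility.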

“Our research reveals how recent insights from psychological research on social learning and cultural transmission can help predict which occupations can be successfully automated, and which need remain human. Domains like religion, which rely on agents modeling their epistemic and moral commitment to belief systems and each other, may not be easily outsourced to robots,” the study authors concluded.

The study makes an important contribution to the scientific understanding of how humans perceive AI agents. However, it also has limitations that need to be taken into account. Notably, the study authors did not explore conditions under which robots could be made more credible or likable as religious figures. It is possible that the results do not apply to all robots and that modifying certain characteristics of robots, or the context of their use, might lead to different results.

The paper, “Exposure to Robot Preachers Undermines Religious Commitment,” was authored by Joshua Conrad Jackson, Kai Chi Yam, Pok Man Tang, and Ting Liu.

© PsyPost