The 11th annual Howard M. Garfield Forum featured a panel entitled “Apocalyptic AI: Religion, Artificial Intelligence, and the End of the World (as We Know It)” addressing the social and religious implications of artificial intelligence (AI). The Tuesday event, co-sponsored by the Department of Religious Studies, the McCoy Family Center for Ethics in Society, the Office for Religious Life and the Stanford Humanities Center, sought to define AI’s relationship with humanity.
Kathryn Gin Lum, assistant professor of religious studies, moderated the discussion. The panel featured Robert Geraci, a Manhattan College religious studies professor; Sylvester Johnson, professor and director of Virginia Tech’s Center for the Humanities; and Jerry Kaplan, a futurist, Silicon Valley entrepreneur and adjunct professor at the Freeman Spogli Institute. All three panelists have been heavily involved in discussing and teaching issues related to AI.
The panelists began by discussing the definition of AI, revealing that even experts lack a shared understanding of the concept.
“The classic definition is [AI] is an attempt to build intelligent machines,” Kaplan said, noting that this commonly held view of AI is too simplistic.
“I don’t really know how to define it, because it means many different things to different people,” Kaplan continued, referencing AI’s functions as both a scientific area of study in computer science and a subject of imagination in science fiction.
Johnson agreed that, at its core, AI seeks to replicate human cognitive processes, but argued that the concept needs a more nuanced definition.
“We have never agreed on what human intelligence is,” Johnson explained. “We don’t even know what human thinking is. So to think that we’re all going to agree on what machine intelligence is is jumping over the fact that intelligence itself is still very much in dispute.”
To Johnson, the absence of a widespread, coherent understanding of AI has served to generate important conversations about the meaning of technology and humanity. For example, the ability to use medical devices to implant AI technology in human bodies has blurred the boundaries between human and machine. Johnson believes that this ambiguity, along with the ambiguity of AI and even humanity itself, has forced Western societies to question long-held beliefs.
“The assumption about what a human is is a very culturally-specific orientation that has never been global, that has never been universal and that is coming under greater and greater pressure,” Johnson said.
Historically, these socially produced definitions of humanity have functioned to oppress outside groups, Johnson believes. He says that the outcome of today’s debates over who is a human and what is a machine will, in the future, serve to delineate the rights of those he called “hybrids” — part human and, due to AI implants, part machine.
In Geraci’s opinion, the question of what AI can do is less important than what people believe that it can do. Whether AI is understood as having the potential to import an individual’s thoughts into a machine or to create “transcendentally intelligent” machines, Geraci argued that beliefs in AI’s capabilities have the power to shape the technology’s social function.
“How are they used [to] produce technological paradigms?” Geraci asked, referring to the various understandings of AI. “How are they used to influence pop culture? And how do they play themselves out in the lives of people for whom they are believable trajectories?”
Ultimately, these questions of social function allow Geraci to study AI as an extension of religion.
“Human beings are deeply and inherently religious,” Geraci explained. “We are looking for meaning, for magic, for wonder, for enchantment. If you don’t have a universe where you can see the world as meaningful because God made it that way, you’re going to need some other avenue for expressing that deeply human behavior.”
Illustrating his point by asking audience members to imagine the panic they would feel upon leaving their smartphones behind at Levinthal Hall following the evening’s event, Geraci argued that technology has become the means through which many humans find meaning in the world. AI, in its growing potential as both a real and fictional technology, serves this religious role, according to Geraci.
Unlike Johnson or Geraci, Kaplan saw the ambiguity of AI as detrimental to the technology’s public image. More specifically, according to Kaplan, the widespread fear of AI stems from the false view that AI has human qualities, or, in Kaplan’s words, that it is anthropomorphic.
Relating the widespread fear of AI to historical interpretations of new technologies, Kaplan said, “The history of technology is rife with misinterpretation, with advances in information and in technology as representing some kind of anthropomorphic [nucleus] that’s growing and going to threaten or overtake humanity, and there’s just no basis for this.”
Whether AI is viewed as dangerously anthropomorphic or a promising new technology, however, Johnson emphasized that AI cannot be integrated into society without fundamentally changing views toward the category of human beings itself.
“There are still biological humans who are denied basic treatment, basic rights in this country,” Johnson said. “They’re mass incarcerated, they’re exposed to conditions that the United Nations condemns as torture and most people in our society could care less. We know they’re not machines, we know they’re biological Homo sapiens, but we continue to do this.”
Contact Gabriela Romero at gromero2 ‘at’ stanford.edu