LOS ANGELES (AP) — As concerns mount over artificial intelligence and its rapid integration into society, tech companies are increasingly turning to faith leaders for guidance on how to shape the technology — a surprising about-face on Silicon Valley's longstanding skepticism of organized religion.
Leaders from various religious groups met last week with representatives from companies including Anthropic and OpenAI for the inaugural "Faith-AI Covenant" roundtable in New York to discuss how best to infuse morality and ethics into the fast-developing technology. It was organized by the Geneva-based Interfaith Alliance for Safer Communities, which seeks to take on issues such as extremism, radicalization and human trafficking. The roundtable is expected to be the first of several around the globe, including in Beijing, Nairobi and Abu Dhabi.
Tech executives need to recognize their power — and their responsibility — to make the right decisions, said Baroness Joanna Shields, a key partner in the initiative. She worked as a tech executive with stints at Google and Facebook before pivoting to British politics.
"Regulation can't keep up with this," she said. "This dialogue, this direct connection is so important because the people who are building this understand the power and capabilities of what they're building and they want to do it right — most of them."
The goal of this initiative, according to Shields, is an eventual "set of norms or principles" informed by different groups and faiths, from Christians to Sikhs to Buddhists, that companies will abide by.
Challenges lie ahead
Present at the meeting were a variety of faith groups, including representatives from the Hindu Temple Society of North America, the Baha'i International Community, The Sikh Coalition, the Greek Orthodox Archdiocese of America and The Church of Jesus Christ of Latter-day Saints, widely known as the Mormon church.
Before these companies initiated outreach, some traditions had issued their own ethical guidance on using AI. The Church of Jesus Christ of Latter-day Saints has given a qualified approval of the technology in its handbook. "AI cannot replace the gift of divine inspiration or the individual work required to receive it. However, AI can be a useful tool to enhance learning and teaching," it reads.
The Southern Baptist Convention, the largest Protestant denomination in the U.S., passed a resolution in 2023: "We must proactively engage and shape these emerging technologies rather than simply respond to the challenges of AI and other emerging technologies after they have already affected our churches and communities."
One challenge in creating a list of common principles is that global faiths, despite common ground, differ in their values and needs. "Religious communities see priorities differently," said Rabbi Diana Gerson, a roundtable participant and the associate executive vice president of the New York Board of Rabbis.
The partnership highlights a growing coalition between faith and tech, born out of an effort to create moral AI — a contested concept that raises questions about whether such a thing is possible and what it would mean.
"We want Claude to do what a deeply and skillfully ethical person would do in Claude's position," Anthropic states in the public "Claude Constitution" written for its chatbot. That constitution was made with the help of a host of religious and ethics leaders.
In this burgeoning alliance, Anthropic has been the most assertive, at least publicly, in its efforts to court faith leaders. The move follows a public dispute earlier this year with the Pentagon over military use of artificial intelligence, after Anthropic said it would restrict its technology from being used to develop autonomous weapons or for mass surveillance of Americans.
"There's some aspect of PR to it. The slogan was 'Move fast and break things.' And they broke too many things and too many people," said Brian Boyd, the U.S. faith liaison for the nonprofit Future of Life Institute. "There's both a moral obligation on the part of the companies that they're belatedly recognizing, as well as I think, for some members of the companies, an earnest questioning."
Some skepticism emerges
But other advocates for AI regulation and safety aren't so sure these efforts are genuine.
"At best it's a distraction. At worst it's diverting attention from things that really matter," said Rumman Chowdhury, the CEO of the nonprofit Humane Intelligence and the U.S. science envoy for AI under the Biden administration.
Chowdhury says she's not inclined to believe religion is the best place to look for answers to questions surrounding AI and ethics, but she thinks she understands why companies are increasingly turning to it.
"I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principles of ethics," she said. "They have very quickly realized that that's just not true. That's not real. So now they're looking at maybe religion as a way of dealing with the ambiguity of ethically gray situations."
It's unclear to what extent these notoriously opaque companies are translating what they hear from faith leaders into action — or what that action might look like. But some critics fear the conversation about creating ethical versions of the technology distracts from broader conversations about AI and its role in society.
"Under the guise of, 'We're gonna build all this stuff. That's a given. And when we do build these things in these ways, how do we make sure that the end result is maybe good,'" said Dylan Baker, the lead research engineer at the Distributed AI Research Institute. "It's like, 'Wait, wait, wait. We need to question whether we want to be building these things at all.'"
___
Associated Press religion coverage receives support through the AP's collaboration with The Conversation US, with funding from Lilly Endowment Inc. The AP is solely responsible for this content.