Potential next-level creepiness. It would be wise to seek moral clarity without dismissing the contributions of religious leaders, some of whom are uniquely qualified, though many others should stay out of the discussion.
As it stands, there seems to be little actual restraint on how AI operates.
SAN FRANCISCO — Anthropic, an artificial intelligence company valued at $380 billion, can take its pick of Silicon Valley talent thanks to the success of its chatbot Claude. But last month, the start-up sought help from a group rarely consulted in tech circles: Christian religious leaders.
The company hosted about 15 Christian leaders from Catholic and Protestant churches, academia and the business world at its headquarters in late March for a two-day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with The Washington Post.
Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”
“They’re growing something that they don’t fully know what it’s going to turn out as,” said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. “We’ve got to build in ethical thinking into the machine so it’s able to adapt dynamically.”
Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations.
The summit comes as the rapid spread of AI across society puts Silicon Valley leaders under pressure to account for the impact of their technology. Concern about job losses to automation has grown as more businesses embrace AI. OpenAI and Google have been sued by the families of people who died by suicide after intense and personal conversations with chatbots. (Both firms say they have safeguards for vulnerable users; The Washington Post has a content partnership with OpenAI.)
Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence.
The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude’s popularity with programmers, businesses, government agencies and the military.
“A year ago, I would not have told you that Anthropic is a company that cares about religious ethics,” said Meghan Sullivan, a philosophy professor at the University of Notre Dame who participated in the gatherings. “That’s changed.”
WaPo