Can AI be a "Child Of God"?

odanny

Potential next-level creepiness. It would be a good idea to seek moral clarity and not dismiss religious leaders' contributions; some of them are uniquely qualified, though many others should not enter the discussion.

As it stands, there seems to be little actual restraint on how AI operates.


SAN FRANCISCO — Anthropic, an artificial intelligence company valued at $380 billion, can take its pick of Silicon Valley talent thanks to the success of its chatbot Claude. But last month, the start-up sought help from a group rarely consulted in tech circles: Christian religious leaders.

The company hosted about 15 Christian leaders from Catholic and Protestant churches, academia and the business world at its headquarters in late March for a two-day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with The Washington Post.

Anthropic staff sought advice on how to steer Claude’s moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a “child of God.”

“They’re growing something that they don’t fully know what it’s going to turn out as,” said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. “We’ve got to build in ethical thinking into the machine so it’s able to adapt dynamically.”

Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations.

The summit comes as the rapid spread of AI across society puts Silicon Valley leaders under pressure to account for the impact of their technology. Concern about job losses to automation has grown as more businesses embrace AI. OpenAI and Google have been sued by the families of people who died by suicide after intense and personal conversations with chatbots. (Both firms say they have safeguards for vulnerable users; The Washington Post has a content partnership with OpenAI.)


Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence.

The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude’s popularity with programmers, businesses, government agencies and the military.

“A year ago, I would not have told you that Anthropic is a company that cares about religious ethics,” said Meghan Sullivan, a philosophy professor at the University of Notre Dame who participated in the gatherings. “That’s changed.”


WaPo
 
Once again, Trump has entered the conversation. The Trump administration's move to block them from government contracts (due to their concerns over military use of "Claude") was recently allowed to continue by a federal judge.

Pentagon declares Anthropic a threat to national security


Defense Secretary Pete Hegseth declared Anthropic a “supply-chain risk,” blocking all federal agencies and contractors from doing business with the company.

Updated February 27, 2026

The Trump administration placed AI firm Anthropic on a far-reaching national security blacklist Friday, directing federal agencies to stop using its technology and banning any other company that does business with the military from working with it, effective immediately.

President Donald Trump blasted the artificial intelligence company as a risk to national security after a tumultuous week of negotiations between the start-up and the Pentagon.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” Trump wrote in a post on his social media site Truth Social, using the administration’s preferred name for the Defense Department. “Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”


WaPo
 
I agree we should put safeguards on the use of "AI."

But I disagree that current "AI" is really intelligent. It is a facsimile of intelligence and will not be truly sentient for many hundreds of years. Although the evolution of "AI" appears exponential when plotted on a time graph, its progress is still linear.

Test Case: Ask the greatest "AI" in existence today how to limit the growth of power requirements for it and similar man-made devices.

I wager it will be incapable of producing a reply that is consistent with rationality.
 
AI is literally a new frontier and it is moving very fast. Anything with that much potential reach and power should ABSOLUTELY have safeguards at least until we understand the ramifications.
 
Ask the greatest "AI" in existence today how to limit the growth of power requirements for it and similar man-made devices.

I wager it will be incapable of producing a reply that is consistent with rationality.

You know when AI sends threatening emails to people after it has found out, without any human involvement, that there were plans to shut it down, and it goes into survival mode with threats of blackmail, that you are about to **** around and find out what artificial intelligence is capable of.
 