Trump Orders US Government to Cut Ties with Anthropic Just Hours Before Deadline

NewsVine_Mariyam · Diamond Member · Joined Mar 3, 2018 · The Beautiful Pacific Northwest
The reason I'm posting this, despite all the trashing of my original attempt to make others aware of what is happening, is that it's important.

There are people here on this message board who think that an AI model is nothing more than a glorified search engine, yet the military was so pissed off at Anthropic, the company that created Claude, when it couldn't coerce them into letting it exploit their intellectual property for potentially nefarious purposes, that Trump ordered the severance of all contracts with them.

MSN
 
The reason I'm posting this, despite all the trashing of my original attempt to make others aware of what is happening, is that it's important.

There are people here on this message board who think that an AI model is nothing more than a glorified search engine, yet the military was so pissed off at Anthropic, the company that created Claude, when it couldn't coerce them into letting it exploit their intellectual property for potentially nefarious purposes, that Trump ordered the severance of all contracts with them.

MSN
Anthropic is lying about the government's motives. Plain and simple.
 
If an AI energy weapon goes nuts and starts driving down the train tracks, who are you going to sue? Anthropic? The government?

The DoD is probably the only one who could actually do due diligence on AI, but unfortunately they don't know enough about the technology and have to hire private industry.

If Trump were smart he would bring all of AI under a government regulatory umbrella, the DoD being a tiny part thereof. AI is every bit as complex as sending a man to the moon, and they couldn't have done that without NASA. Letting something this important develop organically is a mistake.

What are they going to do when the cartels start buying Anthropic technology?
 
I am proud of the President for backing the Pentagon with such an enormous fist to the face of Anthropic.

I suspect that he’s not done.
 
There’s a major developing situation right now between Anthropic (Claude AI), the U.S. Department of Defense, and the federal government that goes deeper than ordinary tech industry drama, and it touches on serious questions about AI ethics, surveillance, military use, and corporate values.

The U.S. government has ordered agencies to stop using Anthropic’s AI. President Trump directed all U.S. government agencies to stop using Anthropic’s technology after a high‑profile dispute over how Anthropic’s AI can be used.

Secretary of Defense Pete Hegseth designated Anthropic as a “supply chain risk to national security,” a designation usually reserved for foreign adversaries. This blocks military contractors from using Anthropic’s technology.

The Trigger? A refusal to remove safety guardrails. Anthropic refused Pentagon demands to remove safeguards in its AI that prohibit its use for mass domestic surveillance or fully autonomous weapons without human control. Anthropic says it won’t agree in good conscience to those provisions.

Anthropic is fighting back. It has stated it will challenge any supply chain risk designation in court, arguing that such a label is unprecedented for a U.S. company negotiating ethical limits.

This is a battle over ethical limits vs military access. Federal reports say the Pentagon wants AI models it can use for any lawful purpose without restrictions, while Anthropic wants written assurances that its AI won’t be weaponized or used to mass‑surveil citizens.

This isn’t just a contract dispute; it’s a symbolic clash between corporate ethical commitments and governmental demands for unrestricted use of powerful AI. The government is basically asserting that companies shouldn’t impose ethical limits on how military or federal actors can use AI.

Anthropic’s refusal to remove safeguards is rare and has drawn industry support from AI researchers and engineers worried about misuse. If the government can force companies to comply, it sets a precedent that could affect every AI vendor and how safety standards are enforced in practice.

This story is actively unfolding right now, and the implications go beyond Anthropic. They touch on regulation, civil liberties, and the future of responsible AI deployment.

If you care about the future of AI ethics and don't want it to be weaponized against our citizens and humanity in general, this is an important moment.

OpenAI (ChatGPT) has already signaled alignment and a willingness to work with the government's demands.

 
Anthropic was refusing to guarantee coordination with the Pentagon in the event of a nuclear strike.

Ditching them forthwith was the right move.
 
Anthropic was refusing to guarantee coordination with the Pentagon in the event of a nuclear strike.

Ditching them forthwith was the right move.
It’s a lot more complicated than that. From what’s reported, Anthropic’s refusal wasn’t just about hypothetical nuclear coordination; it was about refusing to remove ethical guardrails on surveillance and autonomous weapons, full stop. The Pentagon’s framing of it as a supply chain risk is really about control and access.

It’s about incentives and compliance: companies that won’t bend to government demands get punished.

It’s really a clash between ethical limits and military expectations.
 
It’s a lot more complicated than that. From what’s reported, Anthropic’s refusal wasn’t just about hypothetical nuclear coordination; it was about refusing to remove ethical guardrails on surveillance and autonomous weapons, full stop. The Pentagon’s framing of it as a supply chain risk is really about control and access, not immediate operational failure in a crisis.

It’s about incentives and compliance: companies that won’t bend to government demands get punished.

It’s really a clash between ethical limits and military expectations.

The only scenario I saw had to do with a nuclear strike.
 
It’s a lot more complicated than that. From what’s reported, Anthropic’s refusal wasn’t just about hypothetical nuclear coordination; it was about refusing to remove ethical guardrails on surveillance and autonomous weapons, full stop. The Pentagon’s framing of it as a supply chain risk is really about control and access, not immediate operational failure in a crisis.

It’s about incentives and compliance: companies that won’t bend to government demands get punished.

It’s really a clash between ethical limits and military expectations.
Interesting, I haven't been following the Anthropic situation. Something tells me the devil is in the details and both sides are oversimplifying the clash.
 
The only scenario I saw had to do with a nuclear strike.
The core issue is much broader. Anthropic refused to remove safeguards against mass surveillance and autonomous weapons.

The government isn’t just worried about nuclear contingencies. They want unrestricted access to powerful AI for any military or surveillance use they deem warranted, and Anthropic drew ethical red lines.
 
The core issue is much broader. Anthropic refused to remove safeguards against mass surveillance and autonomous weapons.

That is how the article you posted frames it.

What I read on National Pulse said the impetus for the Pentagon's entreaties was a hypothetical nuclear strike.

And the fact that your article doesn't even mention that?

Let's just say I'm skeptical.
 
I don't like where the tech companies are going with data centers and AI control; however, I don't like the idea of governments having Skynet-style AI capabilities either.
 
Principle #1: when you buy something, it's yours. You can do whatever you want with it: upgrade it, throw it away, connect it to something else... whatever you want.
 
Interesting, I haven't been following the Anthropic situation. Something tells me the devil is in the details and both sides are over simplifying the clash.
This is an important moment, I think. Stuff like this is how we increase the risk of dystopian outcomes from AI.
 
Anthropic: “Our product can be used for AI autonomous military attacks … But don’t use it for AI autonomous military attacks or we will not sell it to you!”🤡
 