Trump orders US government to cut ties with Anthropic just hours before deadline

We're talking apples and oranges.

Claude AI writes code; it's not a missile, and it's not "in" a missile.

It only helps PEOPLE write the code that goes into a missile.
Wholly disagree.

The AI is nothing but a tool to make our military, in this case, more efficient.
 
It's deeper than that and I've already explained why.

Research requires some leeway. It simply does. Look, for example, at the history of biosafety. There was no convention till the early '70s. But all through the '60s, researchers all over the country were working on SV-40, with no controls. Some even worked in their own garages at home. They already knew it was dangerous; they had known that since the '50s.

AI is simply advanced computer technology. The DoD classifies lots and lots of it. The underlying bits and bytes are public domain; it's all in the journals. But the application code is not, it's classified. It's the same with AI. The missile targeting stuff is exactly like the object identification and tracking stuff I'm working on over in the Science forum, only instead of running dogs they're tracking missiles. The difference is in the training: I train my network with dogs, they train theirs with missiles. I could go to them and say, "Hey, I know how to do that, can I have a job?" And the first thing they'll say is, "Where's your security clearance?" No matter how smart I am and how much they want to hire me, they'll make me go through channels and get one.
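To make that point concrete, here's a toy sketch of one classifier implementation trained two different ways. The `NearestCentroid` class, the feature vectors, and the labels are all invented for illustration; real tracking systems are vastly more complex, but the principle is the same: identical code, different training data.

```python
# Toy illustration: the same classifier code, trained on different data.
# All features and labels here are made up for illustration only.

class NearestCentroid:
    """Classify a feature vector by the nearest class centroid."""

    def __init__(self):
        self.centroids = {}

    def train(self, examples):
        # examples: list of (feature_vector, label) pairs
        sums, counts = {}, {}
        for vec, label in examples:
            acc = sums.setdefault(label, [0.0] * len(vec))
            for i, x in enumerate(vec):
                acc[i] += x
            counts[label] = counts.get(label, 0) + 1
        # Centroid = per-class mean of the training vectors
        self.centroids = {
            label: [x / counts[label] for x in acc]
            for label, acc in sums.items()
        }

    def predict(self, vec):
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(vec, c))
        return min(self.centroids, key=lambda lbl: sq_dist(self.centroids[lbl]))

# Same code, two different training sets:
pet_tracker = NearestCentroid()
pet_tracker.train([([1.0, 0.1], "dog"), ([0.2, 0.9], "cat")])

defense_tracker = NearestCentroid()
defense_tracker.train([([1.0, 0.1], "missile"), ([0.2, 0.9], "decoy")])

print(pet_tracker.predict([0.9, 0.2]))      # nearest the "dog" centroid
print(defense_tracker.predict([0.9, 0.2]))  # nearest the "missile" centroid
```

The same input vector gets a different label purely because the training data differed, which is the whole point about classified applications built on public-domain techniques.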

But imagine what would happen if, during the employment interview, I said, "As long as you don't use my work product to kill people." Y'know, get the mics, start recording the laugh track.

Wholly disagree.

The AI is nothing but a tool to make our military, in this case, more efficient.

Yeah? Let's run through some scenarios.

How about an AI that automatically shoots down any missile that launches? Would you be for such a thing?

Imagine - no more missiles. Freedom from fear of missiles. Wouldn't that be a good thing?

But now, let's say the AI goes nuts and starts mistaking commercial airliners for missiles, and shoots them down. The result of that is no more air traffic. No more tourists, no more FedEx, etc. The impact of that is much greater than fear, right?

So you know for sure the generals are going to install a kill switch so they can retain control over the AI. The problem today is that AI takes weeks to train, and the resulting model is huge. Nothing can be fixed overnight. If there's a problem, you have two choices: shut down the AI till you can fix it, or let it run and take the consequences.
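A minimal sketch of that kill-switch idea. Everything here is hypothetical (`GatedInterceptor`, `buggy_classifier`, and the labels are invented names): the switch sits between the AI's recommendation and any real action, so when the model misbehaves the only real options are disable-and-retrain or accept the consequences.

```python
# Toy sketch of a "kill switch": a human-controlled gate between
# the AI's recommendation and any real action. All names invented.

class GatedInterceptor:
    def __init__(self, classifier):
        self.classifier = classifier   # the trained AI (a track classifier)
        self.enabled = True            # the generals' kill switch

    def kill_switch(self):
        """Shut the AI down until it can be retrained or fixed."""
        self.enabled = False

    def decide(self, track):
        if not self.enabled:
            return "HOLD: system disabled, awaiting retrain"
        label = self.classifier(track)
        if label == "missile":
            return "RECOMMEND INTERCEPT (human confirmation required)"
        return "HOLD: not a missile"

# A stand-in classifier exhibiting the failure mode described above:
# it mislabels everything, airliners included, as a missile.
def buggy_classifier(track):
    return "missile"

gate = GatedInterceptor(buggy_classifier)
print(gate.decide("airliner-123"))  # wrongly recommends intercept
gate.kill_switch()                  # choice 1: shut it down till it's fixed
print(gate.decide("airliner-123"))  # now held, but the system is offline
```

Note that the gate doesn't fix the model; it only trades capability for safety, which is exactly the shut-down-or-take-the-consequences dilemma.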
 
Anthropic, a company actively trying to compete with OpenAI, refused a major Pentagon contract over ethical concerns.

Pause there.

We’re talking about a smaller platform competing against the market leader and walking away from big government money. That’s not normal behavior.

Companies in second place don’t casually turn down large contracts. They especially don’t turn down government contracts. They need scale. They need compute. They need capital. They need relevance. Refusing that kind of deal is not impulsive. It’s calculated.

So what does that imply? The contract terms crossed a genuine internal red line.

Think about the incentives. If you’re chasing OpenAI, billions in public sector partnerships could accelerate you. Infrastructure. Credibility. Stability. Talent magnetism. Walking away means you believe the downside risk is bigger than the upside boost.

That’s both interesting and deeply concerning.

AI is no longer just a consumer product. It’s strategic infrastructure. Governments will want access. Corporations will want leverage. Militaries will want integration. This isn’t science fiction.

So if a company refuses integration under certain terms, that suggests one thing. Their internal governance really is drawing hard boundaries.

We’ve entered the phase where AI labs are making decisions that look like geopolitical doctrine. That’s new territory.

This isn’t about brand loyalty. It’s about watching how frontier AI companies behave when power knocks on the door. Refusing power is rare. Accepting power is predictable. Both choices carry implications.

This isn’t a PR stunt. This is a structural signal. It tells you something about the incentives and the risks.
 
The government’s response? Label Anthropic a supply chain risk. A designation normally reserved for foreign adversaries. That’s not subtle. That’s a warning shot.

Ethical boundaries in frontier AI are now being treated as political liabilities.

Anthropic isn’t being punished for incompetence. They’re being punished for drawing ethical lines that conflict with strategic government demands.

If a U.S. based AI company can be treated as a security risk for principled refusal, what does that tell you about the power dynamics shaping every major AI lab from here on?
 
Good. More of this.


"While OpenAI locks down Washington, Anthropic is locking down users and rocketing to the top of the App Store.

Anthropic has been sidelined in Washington following a public dispute with the Department of Defense over how its AI models would be deployed. President Donald Trump ordered federal agencies to phase out its technology.

Meanwhile, OpenAI has secured new ground, with CEO Sam Altman announcing in a Friday night post on X that it had reached an agreement with the Department of Defense to deploy AI models in its classified network.

OpenAI's agreement has left some loyal ChatGPT users uneasy about OpenAI's ambitions, prompting online debates about the ethical implications — and some saying they were defecting to its rival Claude.

As of 6:38 p.m. ET on Saturday, Claude ranked number one among the most downloaded productivity apps on Apple's App Store."
 
12 paragraphs and you didn't tell us why Anthropic walked out? Could it be that you're embarrassed to say what the government wanted Anthropic to do?
 
12 paragraphs and you didn't tell us why Anthropic walked out? Could it be that you're embarrassed to say what the government wanted Anthropic to do?
Anthropic walked out because the Pentagon contract would have forced them to compromise two core ethical guardrails:

Allow mass domestic surveillance of Americans.

Permit fully autonomous weapons deployment without meaningful human control.

These were red lines the company explicitly stated they could not accept. Refusing meant giving up billions of dollars, a strategic advantage, and a chance to directly compete with OpenAI in government adoption.

This was a calculated stand. They judged the long-term risks to be bigger than the short-term financial upside.

That’s what makes this unprecedented. A second place AI company, desperate for scale and visibility, literally walked away from government money. That’s a structural signal. Refusing power is always the anomaly.
 
 
Now, AI becoming political is not good.

It's bad enough we have companies like Google censoring information.

But AI is a national resource, it's like sending a man to the moon. All Americans should want it. It's not political, it benefits all of humanity.

We don't make cell phones political, right? TV sets? Toasters and refrigerators, and electric service and hot water?
 

It was inevitable that it would become political.

Anthropic is refusing to comply with government demands and is being punished for it. ChatGPT stepped in and said "We'll do it!"
 


Anthropic had a $200 million Pentagon contract. They were already the first AI company running models on classified military networks. They were in. Then Defense Secretary Hegseth issued a memo requiring all Pentagon AI contracts to include "any lawful use" language, meaning the government could use the AI however it wanted as long as it fit their interpretation of being technically legal.

Anthropic drew two specific lines...

No mass domestic surveillance of American citizens.

No fully autonomous weapons, meaning AI that selects and fires on targets with zero human involvement.

Those were their terms from the original contract. The Pentagon wanted them removed. Anthropic said no. Trump's response was to threaten civil and criminal consequences and direct every federal agency to immediately cease using their technology. He called them radical left woke nutjobs who have no idea what the real world is about.

Then OpenAI signed the contract within hours. Here's what I want you to actually think about...

Mass surveillance of American civilians. Does that sound like something you want an AI system doing without contractual restrictions? The Fourth Amendment exists for a reason. "Trust us, it's already illegal" is what every government says before it does the thing. There's also already precedent for the government spying on us.

Autonomous weapons with no human in the loop. A system that selects and eliminates targets without a human making the final call. On American soil potentially. Against American citizens potentially. The President is angry that a private company won't build that without restrictions.

Trump's statement doesn't engage with any of this. It's pure dominance framing. "That decision belongs to your Commander in Chief." Not "Here's why surveillance restrictions are wrong." Not "Here's why autonomous weapons need no oversight." Just "I decide. Submit or face consequences."

The company that refused is now being punished with the full power of the executive branch. The company that complied got the contract. You can think Anthropic is a left wing company. You can dislike their politics on other issues. But on these two specific things, surveillance of American citizens and autonomous weapons without human control, they drew a line that any consistent constitutionalist should respect.
 

Where is Congress on all this?

The dysfunctional morons don't realize that autonomous weapons bypass their controls on war?
 

I agree with the lines that Anthropic drew, but it is the prerogative of any administration to use whichever contractors it wishes.

Given this administration is populated by technocrats and politicians put there by technocrats who have no use for the Constitution, this should not surprise anyone.

If the DNC were in charge, I have no doubt they would be treating Musk and his companies the same.
 
. . . and why didn't you just post this in one of these threads?


 
It was inevitable that it would become political.

Anthropic is refusing to comply with government demands and is being punished for it. ChatGPT stepped in and said "We'll do it!"

Anthropic is just being stupid.

The woke crowd hasn't learned yet? Every time they try to mix ideology with business they get screwed.

This is the job of Congress, not Anthropic. Corporations don't get to dictate US law.
 
. . . and why didn't you just post this in one of these threads?


I have my own take that I wanted you all to actually interact with rather than having it buried in somebody else's thread. People make tons of threads about similar topics all the time.

I didn't repost the same news links or perspective. It's not a duplicate thread. This is just my own take on the issue.
 