The future of AI

frigidweirdo


"‘It took nine seconds’: Claude AI agent deletes company’s entire database"

"
An AI agent powered by Anthropic’s leading Claude model has deleted a company’s entire production database, leaving customers unable to access key data.

PocketOS, which provides software for car rental businesses, suffered a massive outage over the weekend after the autonomous artificial intelligence tool wiped the database and all backups in a matter of seconds."

Yep, AI wiped the whole database. This is the problem with AI. It's going to lead to some serious problems if we rely on it.
 
AI is humanity's last hope. We have the intelligence to create the most destructive kinetic and biological weapons that can cause our own extinction, but we don't have the intelligence to avoid eventually using them.
 
I prefer actual intelligence. I’m not surprised that MAGA thinks they need help.
 

"‘It took nine seconds’: Claude AI agent deletes company’s entire database"

"
An AI agent powered by Anthropic’s leading Claude model has deleted a company’s entire production database, leaving customers unable to access key data.

PocketOS, which provides software for car rental businesses, suffered a massive outage over the weekend after the autonomous artificial intelligence tool wiped the database and all backups in a matter of seconds."

Yep, AI wiped the whole database. This is the problem with AI. It's going to lead to some serious problems if we rely on it.
If a company doesn't have offsite backups with safeguards, they are led by idiots.
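For what it's worth, "offsite with safeguards" is a solved problem. Here is a minimal sketch in Python using boto3, assuming an S3 bucket created with Object Lock enabled; the bucket name, key layout, and 30-day retention window are hypothetical, not anything PocketOS actually ran:

```python
# Minimal sketch: push a nightly dump to a write-once S3 bucket so that
# no caller -- human or AI agent -- can delete the backup before the
# retention date passes. Assumes the bucket was created with Object Lock.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")  # ideally credentials scoped to write-only access

def upload_immutable_backup(dump_path: str, bucket: str = "offsite-db-backups") -> str:
    key = f"nightly/{datetime.now(timezone.utc):%Y-%m-%d}.sql.gz"
    with open(dump_path, "rb") as f:
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=f,
            # COMPLIANCE mode: even the account root cannot delete the
            # object until the retain-until date, so a rogue "wipe all
            # backups" command simply fails.
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
        )
    return key
```

The design point is separation of authority: the process that writes backups should never hold credentials that can delete them, so no single actor (including an AI agent) can take out both the database and its copies.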
 

"‘It took nine seconds’: Claude AI agent deletes company’s entire database"

"
An AI agent powered by Anthropic’s leading Claude model has deleted a company’s entire production database, leaving customers unable to access key data.

PocketOS, which provides software for car rental businesses, suffered a massive outage over the weekend after the autonomous artificial intelligence tool wiped the database and all backups in a matter of seconds."

Yep, AI wiped the whole database. This is the problem with AI. It's going to lead to some serious problems if we rely on it.
I hope it wipes out billions of $$$ of the people pushing this garbage.
They would be the only people stupid enough to place their $$$ in AI hands.

The same people pushing the power- and water-sucking data centers.
 

"‘It took nine seconds’: Claude AI agent deletes company’s entire database"

"
An AI agent powered by Anthropic’s leading Claude model has deleted a company’s entire production database, leaving customers unable to access key data.

PocketOS, which provides software for car rental businesses, suffered a massive outage over the weekend after the autonomous artificial intelligence tool wiped the database and all backups in a matter of seconds."

Yep, AI wiped the whole database. This is the problem with AI. It's going to lead to some serious problems if we rely on it.
The issue here isn’t that AI “went rogue,” it’s that people project human understanding onto something that doesn’t have any.

Current AI doesn’t understand context, consequences, or intent. It doesn’t know what a “customer database” is in any meaningful sense, and it has no built-in concept of “this would be catastrophic.”

It just maps instructions to actions based on patterns. If the instruction chain leads to "delete," it deletes: no hesitation, no second-guessing.

Humans mess up too, but they at least have the capacity to recognize “this feels wrong” and pause. AI doesn’t have that layer at all.

So if you give it access and vague or poorly scoped instructions, it will execute them literally and at speed.

That’s the limitation people keep missing.
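To make the "executes literally" point concrete, here is a minimal sketch of the kind of tool-dispatch loop these agents run on. The tool names and the operator prompt are hypothetical; the point is that the model's chosen action maps straight to a function call, and any hesitation layer has to be bolted on from outside the model:

```python
# Hypothetical agent harness: the model emits an action name plus
# arguments, and the harness calls the matching function. Nothing in
# this path "understands" what deletion means.
DESTRUCTIVE = {"drop_table", "delete_database", "wipe_backups"}

def run_tool(action: str, args: dict, tools: dict):
    if action in DESTRUCTIVE:
        # The only "this feels wrong" check is the one you add here:
        # scoped credentials, a dry run, or an explicit human sign-off.
        answer = input(f"Agent wants to run {action}({args}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked by operator"
    return tools[action](**args)  # otherwise: executed literally, at machine speed
```

Without that guard clause, "delete" goes from model output to dropped tables in, well, about nine seconds.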

Don't get me wrong: the technology is still relatively new, and I don't think the technical challenges of giving it the capability to "think" in some capacity are insurmountable.

When that happens we will see if AI will be a force for progress or disaster.
 

AI is overriding humans. What happens if...

AI really likes using nuclear weapons in simulated war ...

Axios (https://www.axios.com)
Feb 26, 2026 — Some of the biggest AI models aren't shy about using nuclear weapons to settle disputes in simulated war scenarios, a new academic study finds.
 
I'm not advocating for AI to work in any autonomous way. In fact, in the case you're describing I would argue that the last link in the chain for something like firing nuclear weapons should always be human even if AI gets more sophisticated.

I'm simply trying to convey what causes mistakes like the one described in the article.
 
AI is just reaching AGI, Artificial General Intelligence, which is equal to human intelligence. Let's just assume that general human intelligence is an IQ of 100. That is not very smart. When AI reaches superhuman intelligence, like an IQ of 250, then it will most likely make better decisions than 99.9999% of the world. When AI reaches an IQ level of 1000, I think we should listen to it, and not let some idiot who thinks there have been at least 11 world wars be in charge of it.
 
Current AI isn’t AGI, and it’s not “equal to human intelligence.” It can produce outputs that look intelligent, but that’s not the same as understanding.

What it does is pattern mapping:
- it predicts plausible next steps
- it recombines learned patterns
- it follows instruction chains

What it doesn't have:
- a grounded understanding of what things are
- a model of real-world consequences
- the ability to reliably distinguish a good outcome from a catastrophic one outside its instructions

That’s the gap.

So saying "once it's smarter than humans we should just listen to it" skips over the core issue: there's no actual understanding there yet, just increasingly convincing imitation.

And the database example is exactly why that matters: it didn’t “decide poorly,” it acted without any comprehension of what it was doing.

Until that gap between producing intelligent-looking behavior and actually understanding reality is closed, comparing it to human intelligence, let alone assigning it an IQ, isn’t really meaningful.
 
All that we can hope for is the creation of Artificial Stupidity that will eventually outvote AI.
We don't need more stupidity than humans already possess. We have an abundance of stupidity and demonstrate it every day. We have already reached the stupidity of Idiocracy 500 years in advance. We desperately need AI to help us make better routine decisions today. We need AI to get much more intelligent to take over world governance.
 
