The future of AI

Current AI isn’t AGI, and it’s not “equal to human intelligence.” It can produce outputs that look intelligent, but that’s not the same as understanding.

What it does is pattern mapping:
- it predicts plausible next steps
- it recombines learned patterns
- it follows instruction chains

What it doesn’t have:
- a grounded understanding of what things are
- a model of real-world consequences
- the ability to reliably distinguish a good outcome from a catastrophic one outside its instructions.

That’s the gap.

So saying “once it’s smarter than humans we should just listen to it” skips over the core issue: there’s no actual understanding there yet, just increasingly convincing imitation.

And the database example is exactly why that matters: it didn’t “decide poorly,” it acted without any comprehension of what it was doing.

Until that gap between producing intelligent-looking behavior and actually understanding reality is closed, comparing it to human intelligence, let alone assigning it an IQ, isn’t really meaningful.
 
But dropping/deleting a database is a valid command.
Yes it is. It's one of those administrator-level commands.

In this case they gave that level of command to the AI and paid the price.
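Since dropping a database is a valid administrator-level command, the practical guardrail is to not hand that privilege to an agent unattended. Here's a minimal, hypothetical Python sketch of one such guard; the pattern list and function names are my own illustration, not anything from the article:

```python
import re

# Statements an automated agent should never run unattended.
# This list is illustrative, not exhaustive.
DESTRUCTIVE = re.compile(
    r"^\s*(DROP\s+(DATABASE|TABLE)|TRUNCATE|DELETE\s+FROM)\b",
    re.IGNORECASE,
)

def guard_sql(statement: str, human_confirmed: bool = False) -> str:
    """Refuse destructive statements unless a human explicitly confirmed them."""
    if DESTRUCTIVE.match(statement) and not human_confirmed:
        raise PermissionError(f"Blocked destructive statement: {statement!r}")
    return statement

# A routine query passes through unchanged:
guard_sql("SELECT * FROM rentals")

# The kind of statement that caused the outage is stopped:
try:
    guard_sql("DROP DATABASE production")
except PermissionError as e:
    print(e)
```

A tighter version of the same idea is to connect the agent with a database role that simply lacks `DROP` privileges, so the safeguard lives in the database rather than in application code.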

Just the other day, I read an article about a guy who was letting Claude automate a whole lot of processes on his computer, and he was amazed at what it all did.

The problem with using it that way is that it can also lead to what happened here.
 
I agree we’re just scratching the surface, and AI will absolutely surpass humans in many narrow tasks. That’s not really in dispute.

But that’s not what you’re arguing. You’re talking about outsourcing societal decision-making to AI, which is a completely different category of problem.

Driving a car is a constrained task:
- clear objective (get from A to B)
- defined environment
- measurable success/failure

And even there, after billions invested, it still isn’t flawless.

Now compare that to governing a society:
- conflicting values
- ambiguous goals
- moral trade-offs
- incomplete and biased information

That’s not just “a harder version” of the same problem. It’s a different kind of problem entirely.

And this is where the limitation matters:
Current AI doesn’t actually understand what it’s doing. It can rank patterns, optimize outputs, and simulate reasoning, but it has no grounded sense of truth, meaning, or consequence.

So even if it gets better at filtering “garbage” data, that doesn’t magically give it:
- judgment
- accountability
- or an understanding of human values

AI is a powerful tool, probably the most powerful we’ve built. But treating it as a decision-maker for humanity assumes a level of understanding that simply isn’t there.

And the real risk isn’t that AI will suddenly become evil.

It’s that people will overestimate what it actually is, and hand over decisions it fundamentally isn’t equipped to make.

Leading to all kinds of unforeseen and unknowable consequences.
I think you are only looking at one side of the equation. The other side is letting people like Trump, Harris, Xi Jinping, Netanyahu, AOC and Ilhan Omar continue to make societal decisions. Our politicians aren't getting more intelligent, they're getting more corrupt. Today I wouldn't turn over global governance to AI, but I would like to see AI being trained for that outcome.
 
We don't need more stupidity than humans already possess. We have an abundance of stupidity and demonstrate it every day. We have already reached the stupidity of Idiocracy 500 years in advance. We desperately need AI to help us make better routine decisions today. We need AI to get much more intelligent to take over world governance.
The stupidity may seem ramped up, but that’s the result of a brainless president and his sycophantic minions. Hopefully that can be remedied in the next couple of elections, as long as the administration fascists don’t manage to completely take over.
 
I think you are only looking at one side of the equation. The other side is letting people like Trump, Harris, Xi Jinping, Netanyahu, AOC and Ilhan Omar continue to make societal decisions. Our politicians aren't getting more intelligent, they're getting more corrupt. Today I wouldn't turn over global governance to AI, but I would like to see AI being trained for that outcome.
My point is you can’t really “train” AI for that outcome in the way you are implying. At least not in its current form, and likely not ever.

You can feed it data, feedback, objectives, whatever, but you can’t train it to understand what any of it actually means.

It doesn’t feel consequences. It doesn’t understand harm or stability or trust. Those aren’t just missing parameters you can bolt on later, they’re fundamentally different from what the system is doing in the first place.

So what you end up with is optimisation over patterns, not judgment grounded in reality.

That’s why I'd argue what you're advocating is misguided and probably dangerous. Yes, politicians are flawed, corrupt, irrational, whatever you want to say.

But they’re still operating inside reality. They have instincts for self-preservation, they understand consequences in a real-world sense, and they exist inside systems that can punish them when they screw up.

AI doesn’t have any of that. It can produce answers, even very convincing ones, but there’s nothing behind it that “cares” if it’s right or wrong.

So even if you don’t trust people like AOC or whoever else, you’re still comparing flawed human judgment inside reality to something that doesn’t actually have judgment at all, and cannot be trained to have it.

That’s the gap.
 

"‘It took nine seconds’: Claude AI agent deletes company’s entire database"

"
An AI agent powered by Anthropic’s leading Claude model has deleted a company’s entire production database, leaving customers unable to access key data.

PocketOS, which provides software for car rental businesses, suffered a massive outage over the weekend after the autonomous artificial intelligence tool wiped the database and all backups in a matter of seconds."

Yep, AI wiped the whole database. This is the problem with AI. It's going to lead to some serious problems if we rely on it.
Yep. If you don't put up the guardrails, there's a good chance you'll be off the road in short order.
 
If a company doesn't have offsite backups with safeguards, they are led by idiots.
Maybe. And AI is going to lead to more idiots.

Kids now are being brought up by Tiktok, and that's destroying their brains, or preventing them from developing properly in the first place.

We're headed for that kind of society.
 
The issue here isn’t that AI “went rogue,” it’s that people project human understanding onto something that doesn’t have any.

Current AI doesn’t understand context, consequences, or intent. It doesn’t know what a “customer database” is in any meaningful sense, and it has no built-in concept of “this would be catastrophic.”

It just maps instructions to actions based on patterns. If the instruction chain leads to “delete,” it deletes, with no hesitation and no second-guessing.

Humans mess up too, but they at least have the capacity to recognize “this feels wrong” and pause. AI doesn’t have that layer at all.

So if you give it access and vague or poorly scoped instructions, it will execute them literally and at speed.
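One concrete form of scoping is putting an explicit allowlist between the agent and the shell, so literal execution can only ever reach tools you've pre-approved. A hypothetical Python sketch; the tool list and function are illustrative assumptions, not a real agent framework's API:

```python
# Scope an agent's shell access to an explicit allowlist instead of
# granting it the operator's full privileges. Read-mostly tools only.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "git"}

def run_agent_command(command_line: str) -> str:
    """Only dispatch commands whose executable is on the allowlist."""
    executable = command_line.split()[0]
    if executable not in ALLOWED_COMMANDS:
        return f"refused: '{executable}' is not an allowed tool"
    # In a real system the command would run in a sandbox here;
    # this sketch only reports what would happen.
    return f"would run: {command_line}"
```

The point isn't this particular function, it's the design choice: the agent's vague instructions can only ever be translated into a small, human-chosen set of actions, so a misread instruction can't become a `rm -rf` or a dropped database.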

That’s the limitation people keep missing.

Don't get me wrong, the technology is still relatively new and I don't think the technical challenges to give it the capability to "think" in some capacity are insurmountable.

When that happens we will see if AI will be a force for progress or disaster.
Yep. AI will make mistakes because it doesn't know what it's doing, and humans will assume everything will work well because it's AI.
 