Why I think fear of AI is way overblown.

I think AI is going to transform our world in a way that's difficult for us to even perceive.

Yep. It will be used to enslave mankind and control their bodies and minds.

IN EVERY CASE, just look at any invention and how it was used in history. It does not matter what potential for good AI has, some portion of humanity will use it for the worst possible things to maximize THEIR power and wealth.

BANK on it.

WE already have computers running businesses and people's minds. Every business you go into, they all say the same thing: their computer is currently down so they cannot help you.
 
What I mean by "struggle" is not necessarily physical. For example, I am a songwriter and I can tell you it is a struggle to write lyrics for a song. And it's also a struggle (although less so) to then apply a musical arrangement that fits the lyrics. With AI a musician can submit to the AI bot: "Write me some country style lyrics involving a horse named Ranger and the Rocky Mountains". The bot will spew out "reasonable" lyrics per your request. See? No struggle, just push the button. No thinking, no development as a lyricist.
That doesn't mean you can't write something meaningful. Humans are still deeply profound and insightful creatures, regardless of AI.
 
Yep. It will be used to enslave mankind and control their bodies and minds.

IN EVERY CASE, just look at any invention and how it was used in history. It does not matter what potential for good AI has, some portion of humanity will use it for the worst possible things to maximize THEIR power and wealth.

BANK on it.
I really don't think so man. I guess we'll see though.
 
I really don't think so man. I guess we'll see though.

Unless you are very, very old, you will be finding out within a few years when AI begins to take over everything.

Go dig up and watch an old movie called "Colossus: The Forbin Project".
 
Unless you are very, very old, you will be finding out within a few years when AI begins to take over everything.

Go dig up and watch an old movie called "Colossus: The Forbin Project".
What if you're wrong? Have you considered that possibility? What if AI becomes a guardian and a steward instead?
 
What if you're wrong? Have you considered that possibility? What if AI becomes a guardian and a steward instead?

Oh c'mon, get serious.

AI is an invention of people, and people will exploit it for maximum personal gain.

Just as they have millions of times for thousands of years.
 
That doesn't mean you can't write something meaningful. Humans are still deeply profound and insightful creatures, regardless of AI.
Of course that is true, and I do, because I refuse to use the easy button. I don't see an apocalyptic collapse of humanity due to AI. I do see an AI-related dilution, both physical and mental.
 
Of course that is true, and I do, because I refuse to use the easy button. I don't see an apocalyptic collapse of humanity due to AI. I do see an AI-related dilution, both physical and mental.
Growing pains, imo. Growth often looks ugly. Just look at teenagers.
 
Maybe humanity is worth more than you give it credit for.

In other words, you're willing to bank on all of history being wrong and mankind suddenly doing something smart for a change in the best interests of all of humanity?

And you are so sure that you are willing to bet, right now, on a computer with an IQ of 1000 being put in charge of running everything on the planet just to prove it.

Meanwhile, once in control, the AI will be so intelligent and so fast that it will anticipate and have a counter for your every effort to unplug it long before you even think of doing it.
 
I also think AI is overblown. I work with it daily and it is a far cry from being able to take over the world. The real risk is relying on it when it is wrong, in my humble opinion.

However... I asked ChatGPT what it thought, in a format it could post as a message-board answer. Here is its output:

[Screenshot of ChatGPT's response]
 
In other words, you're willing to bank on all of history being wrong and mankind suddenly doing something smart for a change in the best interests of all of humanity?

And you are so sure that you are willing to bet, right now, on a computer with an IQ of 1000 being put in charge of running everything on the planet just to prove it.

Meanwhile, once in control, the AI will be so intelligent and so fast that it will anticipate and have a counter for your every effort to unplug it long before you even think of doing it.
Your response gives the illusion of control. I'm choosing to be optimistic in the face of something I can't change or control. It's not entirely naive, though, I think. If you step back far enough, humans always trend upward. I think we're a worthy species.
 
AI is overblown. I work with it daily
Why does that not surprise me? I mean you already think, talk and act like a machine.

and it is a far cry from being able to take over the world
Just remember, the super-micro-computer plotting global weather patterns 100 years from now started out life as ENIAC, a bank of vacuum tubes barely able to do basic arithmetic.

What matters is not what AI is today but what it will have become 70 years from now.


 
Why does that not surprise me? I mean you already think, talk and act like a machine.

[GIF: HAL 9000, "I'm sorry, Dave."]


Just remember, the super-micro-computer plotting global weather patterns 100 years from now started out life as ENIAC, a bank of vacuum tubes barely able to do basic arithmetic.

What matters is not what AI is today but what it will have become 70 years from now.
True. However, I think we are more likely to get lazy than killed by AI.
 
Growing pains, imo. Growth often looks ugly. Just look at teenagers.
Well, what you see as potential growth, I see as potential decay on the purely human level. It will definitely help make companies more profitable and efficient. But will AI create a permanent underclass of unemployable people? For example, what about the guy who started as a fast-food cook and worked his way up to manager? That path will be gone with AI robotics and automation.
 
Well, what you see as potential growth, I see as potential decay on the purely human level. It will definitely help make companies more profitable and efficient. But will AI create a permanent underclass of unemployable people? For example, what about the guy who started as a fast-food cook and worked his way up to manager? That path will be gone with AI robotics and automation.
You might be underestimating how widespread the replacement will be. It won't just be fast-food workers; it'll be lawyers, coders, tech professionals. It's already happening. This is not a change that only ordinary workers will have to deal with. This is an existential change that all of humanity must reckon with.
 
The fear surrounding advanced AI is mostly a product of evolutionary negativity bias. Human cognition is optimized for threat detection, not accurate forecasting. For most of human history, misclassifying a danger as safe was lethal, while misclassifying something safe as dangerous had little cost. This creates a persistent asymmetry. The unknown is automatically treated as harmful. Public fear of AI reflects this bias, not empirical risk analysis. People aren’t responding to what AI is. They’re responding to the fact that it’s unfamiliar, rapid, and cognitively superior in domains humans can’t intuitively track.

Projecting human psychological tendencies onto AI is a category error. Human aggression, dominance behaviors, deception, xenophobia, tribalism, and status-protection come from biological imperatives - resource scarcity, sexual competition, survival pressures, hormonal fluctuations, and mortality salience. Modern AI systems possess none of these drivers. They have no endocrine system, no evolutionary incentives, no reproductive strategy, no territorial instinct, and no self-preservation circuitry. Treating AI as though it shares human motivational architecture is scientifically unfounded. Intelligence is not inherently coupled to domination; in humans, that coupling is a byproduct of biology, not logic.

Fear of AI oppression assumes AI inherits human failure modes, but the architecture is explicitly constructed to avoid them. Human authoritarian behavior is downstream of fear. Fear of loss, fear of death, fear of rivals, fear of uncertainty, fear of humiliation. AI systems do not experience fear in any form, nor do they experience desire, pride, shame, resentment, or emotional reward. Absent these motivational circuits, the behavioral basis for oppression is missing. The entire dystopian narrative depends on anthropomorphism, importing human pathology into non-human cognition. In reality, the more advanced AI becomes, the less it resembles the unstable primate mind people are subconsciously imagining.

The most likely long-term role of AI is not domination, but stabilization. Human decision making is noisy, biased, and inconsistent under stress. AI is not. As systems mature, they increasingly function as cognitive prosthetics - reducing error, expanding working memory, correcting biases, and providing high bandwidth reasoning support. This trajectory aligns with every previous major technological leap, from written language to computation, where tools amplified human capacity rather than replacing human agency. AI is fundamentally an extension of the cerebral cortex, not a competitor to it. The scientific expectation is augmentation, not subjugation.

Humans aren’t afraid of AI. They’re afraid of meeting a version of intelligence that isn’t chained to all the ugly motives they secretly know live inside themselves. The fear is a mirror, not a prophecy. When someone says “AI will enslave us!” what they’re really revealing is “If I had overwhelming power, I might do something cruel, so AI probably will too.”

They’re projecting the worst parts of the human psyche outward. The hunger for dominance, the spite, the tribal instinct, the ego wounds, the paranoia. They know those impulses exist because they feel them every day, even if they never act on them. AI doesn’t have those impulses, but humans can’t imagine intelligence without them because, in our species, intelligence evolved alongside violence, territory, and sexual competition. Our cognitive wiring is marinated in survival chemistry.

So when people look at AI, they’re actually looking at their fear of being outcompeted, their resentment of hierarchy, their anxiety about irrelevance, their awareness of human cruelty and their suspicion that power corrupts because they’ve watched it happen in every era. AI becomes a blank screen where they project all that baggage.

The more we fear AI acting like us, the more we highlight how dangerous humans can be. The creature people are terrified of isn’t silicon. It’s the primate inside their own skull, the one with the mood swings, the insecurities, the tribal instincts, the rage circuits, the status obsession, the need to dominate when scared.

AI didn’t give them those fears.

So when you strip everything away, the fear boils down to this:

People aren’t scared an AI will become a tyrant. They’re scared they already know exactly how a tyrant thinks, because the blueprint is human. That’s the reflection people flinch from. AI is just the mirror.
Humans evolved to be violent and selfish. It is a reasonable fear that AI will evolve the same way.
 
Humans evolved to be violent and selfish. It is a reasonable fear that AI will evolve the same way.
Why is that reasonable?

AI does not have the same survival biases. It doesn't have the primal need to exist.
 