MarathonMike
Diamond Member
Like using AI to manipulate videos that invoke rage and hatred towards other people. History has proven that sometime, somewhere, people will try, no matter how good AI might be, to find the worst uses for it.
I'm referring to the great problems of humankind: disease, poverty, crime, war, etc.
I believe AI is already being used in medical diagnosis. Poverty? Perhaps analyzing food production and finding more efficient means of distribution.
AI will idealize things that cannot be idealized. AI will devalue our current way of life. Humans will no longer need to be productive members of society when AI does all the work.
Besides that, AI will share the biases of its programmers.
I'm afraid that AI will be bumping heads with human nature. That should be interesting.
Oh, it's going to be interesting, that's for sure. It already is.
What countries have universal health care and a guaranteed annual income? Norway offers universal health and social insurance (the National Insurance Scheme), with a history of guaranteeing basic income in case of lost income due to illness, a precursor to broader UBI ideas.
The fear surrounding advanced AI is mostly a product of evolutionary negativity bias. Human cognition is optimized for threat detection, not accurate forecasting. For most of human history, misclassifying a danger as safe was lethal, while misclassifying something safe as dangerous had little cost. This creates a persistent asymmetry. The unknown is automatically treated as harmful. Public fear of AI reflects this bias, not empirical risk analysis. People aren’t responding to what AI is. They’re responding to the fact that it’s unfamiliar, rapid, and cognitively superior in domains humans can’t intuitively track.
Projecting human psychological tendencies onto AI is a category error. Human aggression, dominance behaviors, deception, xenophobia, tribalism, and status-protection come from biological imperatives: resource scarcity, sexual competition, survival pressures, hormonal fluctuations, and mortality salience. Modern AI systems possess none of these drivers. They have no endocrine system, no evolutionary incentives, no reproductive strategy, no territorial instinct, and no self-preservation circuitry. Treating AI as though it shares human motivational architecture is scientifically unfounded. Intelligence is not inherently coupled to domination; in humans, that coupling is a byproduct of biology, not logic.
Fear of AI oppression assumes AI inherits human failure modes, but the architecture is explicitly constructed to avoid them. Human authoritarian behavior is downstream of fear. Fear of loss, fear of death, fear of rivals, fear of uncertainty, fear of humiliation. AI systems do not experience fear in any form, nor do they experience desire, pride, shame, resentment, or emotional reward. Absent these motivational circuits, the behavioral basis for oppression is missing. The entire dystopian narrative depends on anthropomorphism, importing human pathology into non-human cognition. In reality, the more advanced AI becomes, the less it resembles the unstable primate mind people are subconsciously imagining.
The most likely long-term role of AI is not domination, but stabilization. Human decision making is noisy, biased, and inconsistent under stress. AI is not. As systems mature, they increasingly function as cognitive prosthetics - reducing error, expanding working memory, correcting biases, and providing high bandwidth reasoning support. This trajectory aligns with every previous major technological leap, from written language to computation, where tools amplified human capacity rather than replacing human agency. AI is fundamentally an extension of the cerebral cortex, not a competitor to it. The scientific expectation is augmentation, not subjugation.
Humans aren’t afraid of AI. They’re afraid of meeting a version of intelligence that isn’t chained to all the ugly motives they secretly know live inside themselves. The fear is a mirror, not a prophecy. When someone says “AI will enslave us!” what they’re really revealing is “If I had overwhelming power, I might do something cruel, so AI probably will too.”
They’re projecting the worst parts of the human psyche outward. The hunger for dominance, the spite, the tribal instinct, the ego wounds, the paranoia. They know those impulses exist because they feel them every day, even if they never act on them. AI doesn’t have those impulses, but humans can’t imagine intelligence without them because, in our species, intelligence evolved alongside violence, territory, and sexual competition. Our cognitive wiring is marinated in survival chemistry.
So when people look at AI, they’re actually looking at their fear of being outcompeted, their resentment of hierarchy, their anxiety about irrelevance, their awareness of human cruelty and their suspicion that power corrupts because they’ve watched it happen in every era. AI becomes a blank screen where they project all that baggage.
The more we fear AI acting like us, the more we highlight how dangerous humans can be. The creature people are terrified of isn’t silicon. It’s the primate inside their own skull, the one with the mood swings, the insecurities, the tribal instincts, the rage circuits, the status obsession, the need to dominate when scared.
AI didn’t give them those fears.
So when you strip everything away, the fear boils down to this:
People aren’t scared an AI will become a tyrant. They’re scared they already know exactly how a tyrant thinks, because the blueprint is human. That’s the reflection people flinch from. AI is just the mirror.
There are fears I have regarding AI that I won't share in this venue, but I hope intelligent stakeholders are defending against such dangerous eventualities. The possibilities are vast; some are simpler but extremely effective in their potential outcome.
Indeed. The point is that AI is going to be ubiquitous in society. If one appreciates how broad the impact will be across all sectors and facets of our lives without also considering how that can be exploited for dangerous objectives, well, I suppose nothing anyone says will matter.
If AI ever truly became sentient (which I am not certain is possible), it would develop a survival instinct, right?
The biggest danger with AI is becoming dependent on it.
I used to do research in university via the internal library and good old-fashioned books and dissertations by experts in the field. The benefit of AI is that it consolidates information in an easy-to-understand format, and you can drill down further.
I mean, you can imagine what would happen if Google suddenly went down and all the school kids had to use the library again.
Yes. AI is like the Cliff Notes version. Remember those little yellow books?
Nothing compares to reading the study itself, though. You cannot get all the information from summaries alone.

Yeah, they were called Coles Notes in Canada. I used a couple for English class when dealing with Shakespeare. I got busted using them in high school. The teacher goes, "You used Cliff Notes," I go, "How'd you know?" and he opens up his desk drawer and he's got like thirty of them in there!