The emergence of "Super-AI" that can outperform humans in nearly every cognitive task - threat to humanity or not?

Do you deem Super-AIs to be a threat to humanity?


  • Total voters: 18

GavanPeacefan (Gold Member, joined Mar 8, 2018, Amsterdam, Netherlands)

Aside from the completely lame CMS at indiatimes.com, let's bear in mind that a lot of humans have been major screwups, not just for others but even for themselves.
#Hamas is the latest example of that.

So let's NOT demonize super-AI too soon, shall we?
The smart, assertive, mostly friendly, and very unafraid human warriors and resistance members can handle it.
Back off, Harry and Meghan.
 
What do Harry and Meghan have to do with it?
 
I have no problem with AI and use it every day when looking up stuff.
 

 
With this we can easily absorb a rapidly declining fertility rate. Once it's under 2 kids per woman, it will correct itself.
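For what it's worth, the arithmetic behind sub-replacement fertility can be sketched in a few lines of Python. This is a toy model I'm adding for illustration (the function name, the 1.6 figure, and the starting population are my own assumptions, not from the thread): each generation is roughly (fertility rate / 2) times the size of the one before it, so a rate below 2 compounds into decline rather than self-correcting on its own.

```python
# Toy projection, purely illustrative: each generation is about
# (fertility rate / 2) times the size of the previous one, ignoring
# mortality, migration, and overlapping generations.
def project_generations(initial_pop, fertility_rate, generations):
    sizes = [initial_pop]
    for _ in range(generations):
        sizes.append(sizes[-1] * fertility_rate / 2)
    return sizes

# At 1.6 children per woman, each generation shrinks to 80% of the last:
for size in project_generations(1_000_000, 1.6, 3):
    print(round(size))
# prints 1000000, 800000, 640000, 512000
```

The point of the sketch is only that the decline is geometric; whether it "corrects itself" depends on fertility rising back above 2, which nothing in the arithmetic guarantees.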
 

I bet they can't tie a shoe.
 
There will always be areas of expertise that "Super AI" will not penetrate. But AI and AI-powered robots will replace humans, or at least approximate human performance, in many jobs. That will create a large class of unemployable people and, in general, diminish ambition and the drive to "do it yourself".
 
"game changer" lmao

That's worse than the clowns peddling "AI."
Read the linked article if you want to actually learn what is on the horizon with AI and AI driven machines. Amazon is automating their warehouses which will result in hundreds of thousands of humans replaced by machines.

 
Read the article?! I work in the industry. You've been duped.
 

AI will take over the planet; not with a roar but a whimper.

Humans will come close to extinction, as there will be no purpose to life other than living like farm animals: eating, sleeping, and playing. All productive activities will be done by unfeeling machines, efficiently and cheaply. There will no longer be a reason for humans to work. Future exploration, if any, will be done by von Neumann machines.
 

Prevention is often better than a cure, right? Nothing in the world is perfect! :)

Several well-documented AI failures illustrate genuine risks of harm, demonstrating that even advanced systems can cause significant damage through error, bias, or misuse. Key examples below:

The Robodebt Scandal (Australia, 2016–2020): An AI-driven welfare debt collection program miscalculated debts, disproportionately impacting vulnerable populations, leading to legal action, a royal commission, and a $1.2 billion settlement. The failure stemmed from flawed algorithms and negligence in oversight. aiconsultinggroup

IBM Watson for Oncology (2018–2023): Marketed as a revolutionary cancer-treatment assistant, Watson often provided unsafe or incorrect advice due to poor data quality and a limited grasp of complex medical nuances. It was eventually discontinued after billions were invested and critical safety concerns emerged. ethics.harvard

Cruise Robotaxi Incident (2023): An autonomous vehicle's perception system failed to detect a pedestrian and the car dragged her, causing serious injury, shaking public confidence, and halting the company's operations. digitaldefynd

Facial Recognition Failures (Australian airports, 2019): The technology experienced false positives and misidentifications, raising safety concerns and undermining trust in security systems.

Hallucinations and Misinformation (ChatGPT, 2023): Generative AI models have fabricated false legal cases, medical advice, and other facts, sometimes resulting in legal or safety risks for users who rely on such outputs without verification. aiconsultinggroup+1

AI Bias in Hiring and Recruitment (Amazon, 2014; iTutor Group, 2023): Discriminatory algorithms rejected candidates based on gender or age, perpetuating inequality and leading to legal settlements.

Public Sector AI Failures (New York City's MyCity chatbot, 2024): An AI advising small businesses provided illegal or dangerous legal and health advice, risking legal and safety violations.

Autonomous Vehicle Accidents (2023): Self-driving cars from Cruise and Waymo were involved in accidents due to perception and software errors, illustrating safety risks of current autonomous systems. univio

These examples highlight that, despite good intentions, AI systems can cause harm through misjudgment, bias reinforcement, safety failures, or malicious misuse, underscoring the importance of rigorous safety, oversight, and testing measures. ethics.harvard+3

sources:

1. https://www.perindiscovery.com/news...ters-that-shaped-machine-learning-s-dark-side
2. 7 Significant AI Failures: Tackling Challenges in Responsible AI
3. Post #8: Into the Abyss: Examining AI Failures and Lessons Learned | Edmond & Lily Safra Center for Ethics
4. Top 30 AI Disasters [Detailed Analysis][2025]
5. When AI goes wrong: 13 examples of AI mistakes and failures
6. The Complex World of AI Failures / When Artificial Intelligence Goes Terribly Wrong - Univio
 
Super technology in the hands of those who will use it for evil is always dangerous to humankind.

That is why nukes in the hands of totalitarian governments require a deterrent to ensure they won't use them.

That is why President Trump established the Space Force in his first term. It is critical that the USA maintain superiority in space. Should space around the Earth be controlled by the likes of China, which would then have the capability to disable all our satellites and field weapon systems we could not counter, we would be at the mercy of the CCP, which does not have our best interests in mind in any respect.

A bad actor controlling AI that in turn controlled our power grid, computer systems, internet, and so on could incapacitate us on a whim.

We must always ensure we have people leading our government who are aware of these dangers and who have the instincts, know-how, and courage to deal appropriately with them. If we are too frightened even to think about it and do nothing, the bad actors will have us.
 
Like every other tool man has created, it is amoral, so it can be used for good or evil. We had similar fears about nukes when I was young, but we survived. It may be a good thing to move humanity to the virtual world.
On the nukes I would qualify that with a "so far". AI? I think we have the choice between a competitor and a symbiotic relationship. I hope it is symbiotic.
 
Actually, it goes deeper than that. AI has the potential of becoming self-replicating, and sentient in its own right. Should it become powerful, it would have the ability to decide that mankind is a detriment to its existence. We will need to develop a symbiotic relationship in which it sees us as part of it and would no sooner think of eliminating us than we would of lopping off one of our hands. And yes, in that kind of relationship, I can see one day some lunatic pushing the red button, and the screen lighting up and saying "Like Hell!!!!!!".
 
All true. However, when we look at the human failures, and we have had 300,000 years to sort out our minds, I think AI is doing pretty well for the short time it has existed. The future? No idea. I have seen too much in my 82 years that I didn't even imagine 20 years before it happened.
 