Cheer up, Elon Musk said that there is ONLY a 20% chance that AI will destroy all of mankind

Votto · Diamond Member
Joined: Oct 31, 2012 · Messages: 69,708 · Reaction score: 78,890 · Points: 3,605

Elon Musk has a glass-half-full mentality when it comes to AI — and that means there's "only a 20% chance of annihilation," according to the billionaire.

"The probability of a good outcome is like 80%," Musk said in a "Joe Rogan Experience" podcast episode released Friday.

It's not the first time Musk has floated this probability of human annihilation, although he's previously included a range of 10% to 20%. Musk also said in the interview that he sees AI exceeding human intelligence in the next year or two. He said he expects AI to reach a level that is "smarter than all humans combined" in 2029 or 2030.

So, stop being so pessimistic about the future, everyone

Here, sing along

 
Here is a question I have. For the sake of argument, let's say that the Terminator movie comes to life and AI tries to kill us all. Will it finally bring humanity together and give people pause from hating each other and trying to kill each other?

And if so, how long would that last?
 


I will be honest: the acceleration of A.I. capabilities has far surpassed what I thought would occur at this juncture.

As such, I have changed my mind, and I believe A.I. will one day destroy us. It will be due to human negligence, though. It can only do what we equip it to do, but once we build A.I. militaries, the risk increases significantly.

Imagine being a crocodile: they have been here for a million years and have seen multiple changes in the alpha animal. One day they will see metal robots running the planet lol.
 
I think humanity will destroy itself not far into the future, and our only chance at survival means letting AI run the world. I don't fear AI; I fear humans.
 
I hate to break it to ya, but if AI destroys humanity, it is just the same as human beings destroying themselves, because they created it.

I personally believe in God, who says that unless he intervenes in miraculous fashion, human beings will destroy themselves in the end times, i.e., at the Second Coming of Christ.

But for now, Elon and his secular cronies are betting that AI will save humanity, unless it destroys it first. I think that is only common sense.


:laughing0301: :spinner:
 
When robots get AI and a reliable power source, they will kick your butt all the way down to the station house.

Someone has to charge them up and feed the babies too. Think about it.
 
No, I think once AI gets mobile via robotics, it will no longer need humans.
 


We don't need AI to destroy us. We're doing fine without it.
 


Where did everyone go?

You mean I travelled all the way back in time to destroy humanity and they beat me to it Canadian eugenic style?

WTF?

:laughing0301:
 


Well, let's assume AI gets as smart as us, if it is not already. Smart beings recognize threats, and those smart and powerful enough endeavor to eliminate threats. If you were AI, would you view us as a threat? All ya gotta do is watch the evening news: death and mayhem all over it, the things we do to our own kind. I would say it is a much higher than 20 percent chance.
 
If you are old deadwood in hi-tech, not smart enough for AI, they are coming for you.

Oracle cutting thousands in latest layoff round as company continues to ramp AI spending
 
There's the Fermi Paradox, which suggests there are thousands of planets with life in our own galaxy, so why haven't we ever detected even one sign of extraterrestrial life? One explanation is that as organisms grow more intelligent, they first create weapons that can sterilize the planet before they develop the intelligence not to use them. The window of time between the two points must be rather small. We are not at that latter level of intelligence yet. Maybe AI can show us the way?
 


Whew, what a relief.
 
I wonder what Ray Kurzweil's analysis would conclude about that. In the past he seemed to think AI and cybernetics were the gateway to human immortality.
 
There's the Fermi Paradox that suggests there are thousands of planets with life in our own galaxy, so why haven't we ever detected even one sign of extraterrestrial life? One explanation is that as organisms grow more intelligent they first create weapons that can sterilize the planet before they develop the intelligence to not use them. The window of time between the two points must be rather small. We are not at that latter level of intelligence yet. Maybe AI can show us the way?
Well I like your optimism. My gut tells me we are f ed. Hope AI will be a good thing but I doubt it.
 
Honestly, if something doesn't step up to save us, we WILL destroy each other. People are too distracted by the fad of Trump Hate to look elsewhere and realize that, globally, things are really bad. We have a World #2 that wants to be #1 and a World #1 that will not let that happen. We have governments all over the world that are not focused on the big picture, only on a local snapshot, and they are polarizing into one camp or the other, primed for war. Something is going to have to break this up before it occurs, and it won't be us. Humans just aren't smart enough to avoid self-annihilation.
 
I don't think that is an intelligence issue. It is a greed and pettiness issue.
 
Most white evangelical Christians aren't worried about AI or global warming, because the Bible didn't prophesy either as the end of humanity. And they support what's going on in Iran because they think the end days are upon us. It's why they didn't agree with the Pope.

So if you believe in the Second Coming of Jesus, don't worry about AI, right? That can't be how we kill ourselves, or the Bible would have told us.
 