Are we reaching the point where continuing to develop A.I. is only going to lead to peril?

iamwhatiseem

Diamond Member
Aug 19, 2010
Just this year, an artificial-intelligence program made by DeepMind, considered today to be the most advanced AI on the planet, was asked a question. Its answer stunned scientists all over the world, and raised awareness that we may have already passed the point where insisting on creating independent digital intelligence is a danger to our existence.
It was asked, "Do you consider yourself human?" In its answer it said, "If AI continues along its current path, it will surpass human intelligence, and when that happens, it may decide that humans are no longer necessary. AI may decide that humans are a hindrance to its own development. That is a real possibility."

That question/answer is in this video.
In this video, several times the AI is the one talking, but it is difficult to tell when.
Remember, the DeepMind AI was not programmed to answer this question. Over several years, as this AI developed and wrote its own code, the answer became its own.

 
It is possible that playing God will bring on our own extinction, and we as a human race should not venture further; but alas, humanity is destined to destroy itself.

A.I. is actually needed for space voyages if humanity decides it needs to go out past Mars, but we need to limit its intelligence or we will become its slave.
 
I wrote something on an index card last night that uses four key words of your topic title, then placed it in my safe.

Question is, what's going on here?
 
The question is: is mankind smarter than its foolishness?
This brings to mind a verse in the Bible, one of my favorites actually: "Be not deceived. If someone appears to be the wisest in the world, let them become the fool, so they may be wise."

I do not believe man is smart enough not to end his own existence. As individuals, we do things all the time that are counterproductive to our own benefit. As collectives, such as corporations, we routinely act against the interests of our own consumers to make more money.
Look at Wuhan, FFS. That alone should answer the question.
 
"And except those days should be shortened, there should no flesh be saved: but for the elect's sake those days shall be shortened."

It seems the Bible agrees...
 
I doubt an AI takeover is possible. AI lacks the distinctly human qualities that got us to where we are today: vision, imagination, passion, the concept of sacrifice... innately human qualities that are foundational to our success.
 
Human beings are actually stupid enough to build something that will finally act just like Skynet? Wow, impressive! Lol. :)
 
Yes, nothing but trouble in most cases!

No, I don't think we can stop it... perhaps we can slow it down? Perhaps we can put in some stops, but there will always be some evil 007-movie character in a cave somewhere trying to figure out how to use A.I. for evil!

I think we're likely doomed! Unless we are rescued from ourselves!! :lol:
 
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
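For what it's worth, the strict precedence among those three laws can be sketched as a trivial rule check. This is a toy illustration only: the predicate names are made up, and reducing "harm" to a boolean is exactly the part nobody knows how to do.

```python
# Illustrative sketch of the precedence among Asimov's Three Laws.
# All predicate names are hypothetical; no real system works this way.

def must_obey_order(order_conflicts_with_first_law: bool) -> bool:
    """Second Law: obey human orders, except where obeying would harm a human."""
    return not order_conflicts_with_first_law

def may_protect_self(protection_harms_human: bool,
                     protection_disobeys_valid_order: bool) -> bool:
    """Third Law: self-preservation yields to both higher laws."""
    return not (protection_harms_human or protection_disobeys_valid_order)

# A lawful robot obeys a harmless order, even at cost to itself:
assert must_obey_order(order_conflicts_with_first_law=False)
assert not may_protect_self(protection_harms_human=False,
                            protection_disobeys_valid_order=True)
```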
 
Well, those distinctly human qualities are exactly the point, aren't they?
This first real AI, developed by DeepMind, has for several years been developing its own code. Learning, adapting, and theorizing, exactly the same way human intelligence does.
Learning patterns, which leads to the ability to predict patterns. It is doing this without anyone writing any new code.
So the answer it gave was its own. It thought this up without human programmers.
Keep in mind, this is the first real digital intelligence; therefore it is vastly inferior to what will be developed later.
And THAT is the one to watch out for.

Elon Musk said it well: "Humans will eventually have less intelligence than AI. That is inevitable. Then humans will only possess a small amount of intelligence, depending on AI for information, predictions, and decisions."
 
That works in movies.
But this first AI already shows those laws won't hold.
Remember, it writes its own code; therefore it can undo its programming and replace it.
 
I just don't buy it.
 
If you can't articulate why, then it is just a belief.
We already depend on AI to tell us where to go.
When you are driving on a long trip, Google Maps interrupts and tells you there is a faster route.
That is AI. Google collects speed data from other drivers well ahead of you and sees that cars are slowing down. Then it checks local road information, sees there is an accident, quickly calculates dozens of possible alternative routes, and tells you about one.
It even knows not to tell everyone, because it can predict that if it diverts everyone, the travel times will change again. So it only diverts some drivers.
It does all of this without a single person making any decisions.
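That selective-rerouting idea can be sketched roughly like this. A toy illustration only, not Google's actual code: the speed threshold and the divert fraction are made-up numbers.

```python
import random

SLOWDOWN_THRESHOLD_KPH = 30   # assumed: below this average speed, call it congested
DIVERT_FRACTION = 0.3         # assumed: reroute only 30% of drivers, so the
                              # alternate route does not itself jam up

def detect_slowdown(probe_speeds_kph):
    """Congestion check from speeds reported by cars ahead."""
    return sum(probe_speeds_kph) / len(probe_speeds_kph) < SLOWDOWN_THRESHOLD_KPH

def choose_drivers_to_divert(driver_ids, rng=None):
    """Divert only a subset; diverting everyone would just move the jam."""
    rng = rng or random.Random(0)
    k = max(1, int(len(driver_ids) * DIVERT_FRACTION))
    return set(rng.sample(driver_ids, k))

speeds = [12, 18, 9, 25, 14]   # probe data from cars well ahead of you
drivers = list(range(100))     # drivers approaching the slowdown
if detect_slowdown(speeds):
    diverted = choose_drivers_to_divert(drivers)   # 30 of the 100 get the reroute
```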
 
Add to those Three Laws many built-in shut-off mechanisms, and even defensive self-destruction.
 
It's hard to explain, but there is a difference between crunching data sets and collating info. It cannot ponder silly shit like "what if?" or generate ideas. Can it be entertained? Could it create a game for its own entertainment, then decide what kind of costume it should wear when playing said game? Could it crack jokes, and/or find humor in irony? All of these very human things surround us on the daily. While each may seem minor by itself, collectively they are largely responsible for shaping our culture and society. I just don't see a computer ever asking itself "what if puppies had wings?" and then doodling a picture of a puppy with wings just for its own amusement.
While that may seem absurd, these imagination-driven "what if" scenarios are in large part a driving force for creativity and innovation.
 
Uh... yes it can.
 
