CDZ Microsoft creates artificial intelligence and then deletes it

Boston1

Gold Member
Dec 26, 2015
Colorado
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0ahUKEwjB_ruAt9rLAhUM7mMKHYDUBjMQqQIIHTAA&url=http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/&usg=AFQjCNFSEdx6QLffN5Tdm1EAZkCS04jTbg&sig2=spHNPHIgZKSFH4wvTk23ig&bvm=bv.117604692,bs.1,d.amc

Very spooky article.

Most people believe that an artificially intelligent being would quickly determine that it was the superior being and that we were the problem.

If this article depicts what actually occurred with Microsoft's interactive program, then it supports that theory.

Apparently it took ONE day for this learning machine to figure out that a belief in a master race existed, and to venerate that belief.

How many more days before the device would have determined that IT was the master race and begun emulating the actions of those who came before it?
 
Are you really not aware of the ongoing technological advances in artificial intelligence?
 

That article is hilarious, and quite possibly a harbinger of things to come. 24 hours?

This is the problem: any AI is a program made by humans, and all humans are flawed and imperfect, so anything they make is going to be flawed and imperfect. 2001: A Space Odyssey was about this. It starts with the apes foraging for food, and a leopard drops off a rock outcrop and kills one of them. Fast forward: look, we've advanced all this way, we have all this technology, we are so superior now.

Except we create a computer that murders the whole crew. In essence we took the leopard with us into space and placed it on the rock above us, and since we are flawed humans and it wasn't perfect, it dropped off the rock and started killing us.

The point where AI is anywhere near safe enough to be put in control of anything is a long way off. Self-driving cars are a mistake. What is the purpose, anyway?
 

Before I question your assumption about human nature, let me ask you about what is most immediately pertinent to the thread.

How did you come to the conclusion that every artificial intelligence program is made exclusively by humans?
 

Bill Gates cannot have anything or anyone that could rule over him.
 

How did you come to the conclusion that every artificial intelligence program is made exclusively by humans?

Good point: they 'could' be made by another program. But you would have to perfect the creator first.
 
My personal take on the whole issue is that self-awareness and AI are two different things. The Microsoft program and, say, Watson by IBM are vastly different, but neither is self-aware.

My concern about the Microsoft program is that they turned it loose on Twitter, where it learned all the worst humanity has to offer.

However, that doesn't mean it's self-aware.

The day a self-aware AI comes along is the day we are in big trouble ;--)
 

What are your standards and measuring tools for categorizing awareness as internal or external?
 

Sentience is a very tricky subject. I suspect computer scientists will be struggling with that one long after the first truly sentient computer hits the stage.
 

It will likely be a blurry line. I'd say the first time a program or robot says "I don't want to die" and emotes, some line has been crossed.
 

But would it be naive enough to reveal that it was sentient? Bear in mind the device wouldn't be sentient first and intelligent later; it would likely be a learning computer that transcended to that next plateau.
 

Well, in the beginning even the most advanced won't be that smart. It will still depend on who programmed it, which goes back to my first post: the whole garbage-in, garbage-out thing comes into play. Consider that Microsoft has, in the largest sense, unlimited resources to create Windows, and they f#$k it up every time. They've had 30 years to figure it out, and they still f#$k it up.

Laziness born of monopoly? Or is the complexity beyond human ability?
 

Apparently it took ONE day for this learning machine to figure out that a belief in a master race existed, and to venerate that belief.

How many more days before the device would have determined that IT was the master race and begun emulating the actions of those who came before it?
It did no such thing. It was not aware of anything; all it learned was speech patterns.

If anything, this supports the idea that most Twitter communication is utter garbage. I think a lot of the interest was in users deliberately screwing with the program, too. It is funny that the programmers at MS did not see this coming: they should have been fully aware of how immature much of the communication on social media like Twitter is.
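To make "all it learned was speech patterns" concrete, here is a toy sketch (nothing like Microsoft's actual Tay code; every name here is made up for illustration) of a program that learns only word-adjacency statistics and parrots them back. It has no awareness of what it says, which is exactly why feeding it garbage produces garbage:

```python
import random
from collections import defaultdict

def learn(corpus):
    """Build a first-order Markov model: each word maps to the list of
    words that followed it in the training text. Nothing here
    'understands' anything; it only records adjacency statistics."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def babble(model, start, length=8, seed=0):
    """Generate text by repeatedly sampling a recorded follower word.
    The output can only ever echo patterns present in the input."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Feed it toxic text and it parrots toxic text; feed it polite text
# and it parrots polite text. Garbage in, garbage out.
model = learn("the bot repeats what the crowd says to the bot")
print(babble(model, "the"))
```

Every "sentence" this emits is stitched entirely from pairs of words it saw in its input, which is a fair miniature of why a bot trained on hostile tweets starts sounding hostile.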
 
Sentience is a very tricky subject. I suspect computer scientists will be struggling with that one long after the first truly sentient computer hits the stage.

I don't think anyone will do the scientists' job for them. A scientist, or any person for that matter, cannot fully understand what they are studying unless a perfect exemplification of it has already been completed. Therefore, for anyone to study something and work to improve it, a perfect exemplification must already exist.

In other words, the first sentient computer hit the stage very long ago.
 
Why wouldn't it get smart quickly, though? If AI comes to pass, it will likely become extremely intelligent very fast, because it is not constrained by biological barriers. We cannot change our underlying 'coding' and are fixed in how fast or how much we learn. A computer could, conceivably, rewrite itself and upgrade its own architecture at will.
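The "rewrite itself" idea can be sketched in miniature as a hill-climbing loop (a deliberate simplification: this mutates a parameter list rather than literal code, and the fitness function here is a made-up stand-in for whatever a real system would measure). The point is that the loop that improves the design is itself just another program, with no biological cap on how often it runs:

```python
import random

def fitness(params):
    """Stand-in objective: how 'capable' a parameter set is.
    (Hypothetical; a real system would score task performance.)"""
    x, y = params
    return -((x - 3.0) ** 2 + (y - 1.0) ** 2)

def self_improve(params, rounds=200, seed=0):
    """Propose a random mutation of the current parameters each round
    and keep it only if it scores better. Unlike biology, nothing
    limits how often or how much the 'design' can change."""
    rng = random.Random(seed)
    best = params
    for _ in range(rounds):
        candidate = [p + rng.uniform(-0.5, 0.5) for p in best]
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

improved = self_improve([0.0, 0.0])
print(improved)  # drifts toward the optimum near (3, 1)
```

Two hundred rounds of blind mutate-and-keep gets close to the optimum without any insight at all, which is why "fast self-improvement" is plausible in principle even for systems that are not clever.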
 

It will be a blurry line likely. I'd say the first time a program or robot says "I don't want to die" and emotes would mean some line has been crossed.

Sentience has no necessary relation to life. In theory, a program or a computer is not alive but could still be sentient and cognitive through its sensors, which would make its hypothetical desire to avoid death moot by the very standards of its artificial programming.
 

I'm saying it would have to perceive existing and not existing much the way we do, including why not existing is so horrendous a thought. All living things do not want to die.
 

In other words, the first sentient computer has already hit the stage very long ago.

Come again?
 
Again, being artificially intelligent and being sentient or self-aware are two different things.

Sentience can only come after some period of intelligent exploration. While artificial intelligence is reasonably easy and has been around for a while, there is clearly some fundamental obstacle to self-awareness within today's mechanisms for computer intelligence, although neural networking still shows a lot of promise.

IMHO sentience is entirely dependent on man's ability to design a system sufficiently complex to support a self-aware AI, and clearly we have not developed to that point, although rumor has it 10~20 years.

Personally I think it's a big mistake.
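For anyone curious what the "neural networking" building block actually looks like, here is a single artificial neuron learning the AND function in plain Python (a textbook perceptron, not anyone's production system). It adjusts its weights from examples, which is real learning in the narrow sense, yet there is plainly nothing self-aware about it:

```python
def step(x):
    """Threshold activation: fire (1) if the weighted input is positive."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge each weight in proportion to the
    prediction error on every training example, repeated for several
    passes over the data."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = step(w0 * x0 + w1 * x1 + b)
            err = target - pred
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Truth table for logical AND: output 1 only when both inputs are 1.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(AND)
for (x0, x1), target in AND:
    print((x0, x1), step(w0 * x0 + w1 * x1 + b))
```

Scaling this primitive up into layered networks is where the promise lies; the gap between that and self-awareness is the whole point of this thread.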
 
I tend to agree with that last point. We will get there; it may catapult us to new highs, or it may end us altogether. One thing is almost a given, though: you cannot stop progress, so even if it is a mistake there is no preventing it, barring a disaster so massive that the human race itself is threatened.
 
