Is AI our biggest threat?

Incoming president of British Science Association claims AI bigger threat than terrorism

According to this article, the biggest threat to mankind is not something like global warming, not even Trump.

No, the biggest threat is artificial intelligence.
So it is conservative ideals that are the biggest threat! They are, after all, so artificial in their intelligence.

What do you mean Tax Man?
Do you mean fake Constitutionalists who loudly complained when Obama abused government power to impose unconstitutional ACA mandates, but applaud Trump for using executive orders to bypass the legislature and raise tariffs?

Or do you mean when Trump complains about fake news, but spreads false accusations to slander Ted Cruz?

You mean that sort of intelligence???
 
In a word: no! Any kind of nightmare scenario involving truly independent AI is far in the future. Humans abusing current AI is the real concern.

Actually, current computers (binary systems) don't have the processing power required for true AI.

However...

Recent advances in quantum computing, and the fact that researchers are very close to figuring out how to make it work, are a very real concern, especially since quantum computers have vastly more processing power than our current binary ones.

Wanna scare yourself? Read the link provided.

If you think AI is terrifying wait until it has a quantum computer brain
 

I thought Trump's election marked the end of the civilized world,
and standing for the National Anthem was the biggest threat! Votto
You were confused. The election of Drumpf marked the first time a certified narcissistic buffoon was elected to POTUS without popular support.

No one has ever had a problem with anyone standing for the racist anthem. People had a problem with others choosing to kneel. Said buffoon was chief among them.
 
There is no reason to fear AI. Now, there may be a reason to fear how it's used, but in and of itself it's just a tool.
You really need to stop talking about things you don't understand. Pretending doesn't count.
 


Computer science and psychology experts from Cardiff University and MIT have shown that groups of autonomous machines demonstrate prejudice by simply identifying, copying and learning this behaviour from one another.

It may seem that prejudice is a phenomenon specific to people that requires human cognition to form an opinion – or stereotype – of a certain person or group.

Some types of computer algorithms have already exhibited prejudice, such as racism and sexism, based on learning from public records and other data generated by humans.

However, the latest study demonstrates the possibility of AI evolving prejudicial groups on their own.
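The dynamic the study describes can be illustrated with a toy simulation. This is not the Cardiff/MIT model itself; the agent setup, payoffs, and copy-the-best rule below are invented for illustration. The point it shows is the mechanism the article names: if each machine simply imitates whichever peer scores highest, a bias against the out-group can spread through the whole population with no human teaching it.

```python
import random

# Toy sketch (NOT the study's actual code): agents in two groups each carry a
# "prejudice" level -- the probability of refusing to donate to the out-group.
# After each round, every agent copies the top scorer's strategy, so whatever
# bias happens to pay off propagates by imitation alone.

random.seed(0)

GROUPS = ("red", "blue")

class Agent:
    def __init__(self, group, prejudice):
        self.group = group
        self.prejudice = prejudice  # 0.0 = donates to anyone, 1.0 = in-group only
        self.score = 0

def play_round(agents):
    for a in agents:
        partner = random.choice([x for x in agents if x is not a])
        same_group = partner.group == a.group
        # Donate (cost 1 to donor, benefit 2 to recipient) unless prejudice blocks it.
        if same_group or random.random() > a.prejudice:
            a.score -= 1
            partner.score += 2

def imitate_best(agents):
    best = max(agents, key=lambda x: x.score)
    for a in agents:
        a.prejudice = best.prejudice  # everyone copies the top scorer's bias

agents = [Agent(g, random.random()) for g in GROUPS for _ in range(10)]
for _ in range(20):
    for a in agents:
        a.score = 0
    play_round(agents)
    imitate_best(agents)

# All agents end up sharing a single prejudice level, learned purely by copying.
print(len({a.prejudice for a in agents}))
```

Whether the surviving strategy is biased or tolerant depends on which one scored best early on; the unsettling part is that no line of this code tells the agents to be prejudiced.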
More disturbing are recent reports that AI "brains" can also become...



The Monster Within The AI Brain - Killer Robots Are The "Perfect Extinction Recipe For Humanity"
 
When our super-advanced 'thinking' machines start laughing at us for our ridiculous spiritual beliefs and other handicapping concepts, the call will be to either unplug them or accept their objective viewpoint. Instinctively, this is the fear some feel.
As for killer robots, how could they be worse than what we have produced as unthinking, order-obeying soldiers?
 
“Google Marxism”: “Marx’s essential theme was that the Industrial Revolution of the 19th century had overcome all the challenges of production.” From that point on, Marx held, “human beings would focus on redistributing wealth among the classes rather than creating it.”

One Man's Stand Against "Google Marxism"

Sound familiar ?
Any Leftist hell-bent on wealth redistribution can send their money to me.
 
Any Leftist hell-bent on wealth redistribution can send their money to me.

Heheh. Not the way it works. He will happily send *my* money to you however and afterwards feel very righteous and generous.
 
Just make sure there's a means to turn them off.
The problem is, once 'Super AI' is the designer, the new models at some point will go straight to the robotics factory without any human supervision.

There might not be a 'kill switch' at all in such cases.
 
When the machine gets as smart as us, we are uber fucked, and we are heading into that wall at a rapid pace! In 30-40 years the naked ape will be done. Pretty sad!
 
The weaknesses of democracy are abundantly clear. The alternatives to democracy are abundantly unattractive. Meritocracy sounds like a good idea, yet we understandably fear what will pass for "merit". With a proven standard applied objectively, we could envision a just and efficient system with democratic/republican (note lowercase consonants!) supervision, perhaps a senate-like body.
America has become so complex, and at the same time so confused, that the present government is not up to managing our position in the world. What about combining the maximum that artificial intelligence can contribute with the maximum that our best and brightest can? More or less, it could resemble something like replacing the House while keeping a modified Senate and a stable Judicial branch.
Please notice that this is not a left-right issue in the mind of this poster.

Excellent insight.

Why, in the minds of our most orthodox philosophers and derived and applied political theories, must the rules of social class hierarchy be played out as a zero sum game?
 
Just make sure there's a means to turn them off.
Yes. There has to be a kill switch, always. Thank you Stanley.


Problem, though, is at some point AI will be building other AIs, because they will be able to do it better than humans... what if they decide by themselves to eliminate the kill switch?
AI can't think like that unless it's coded to do so. I'm more worried about nanotechnology.
 
We have computers learning by observation already. It is only a matter of time until one rivals our thought patterns! Four years is the current estimate. Our recent advances in quantum computing will likely make it quicker!
 
Wake me when they start copying our emotional pitfalls and irrational fears, which are the cause of the atrocities that humans commit.
 
Do you want a superior being judging us on how we act? It is this exact logic that scares me! Do you think we would play well to a being that does not have our pitfalls? An intelligent computer would see us as not only a threat to ourselves but a threat to them! We fuck up our own kind; not many animals behave this way!
 

"Do you want a superior being judging us on how we act?"

Think about that for a moment. Isn't that what religion is all about?

A superior being would not have fears or a concept of a threat. That's strictly a human emotion that was hard-coded into our DNA so we could survive.
 
