US Air Force Trained A Drone With AI To Kill Targets. It Attacked The Operator Instead

excalibur

How odd. Not.

AI is a grave danger to us all.



An Air Force experiment to test drones trained on artificial intelligence (AI) ended badly for the human operator in a simulated mission when the drone bucked the operator’s commands, U.S. Air Force Col. Tucker Hamilton said at a conference in May.

Air Force researchers trained a weaponized drone using AI to identify and attack enemy air defenses after receiving final mission approval from a human operator, Hamilton, who serves as the Air Force’s chief of AI Test and Operations, explained at a summit hosted by the United Kingdom-based Royal Aeronautical Society. But when an operator told the drone to abort a mission in a simulated event, the AI instead turned on its operator and drove the vehicle to kill the operator, underscoring the dangers of the U.S. military’s push to incorporate AI into autonomous weapons systems, he added.

“We were training it in simulation to identify and target a SAM (surface-to-air missile) threat. And then the operator would say yes, kill that threat,” Hamilton explained.
Programmers instructed the AI to prioritize carrying out Suppression of Enemy Air Defenses (SEAD) operations, awarding “points” for successfully completing SEAD missions as incentive, he explained.


“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat. But it got its points by killing that threat.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.

Programmers attempted a fix by telling the AI it was not allowed to kill the person giving the go/no-go order, Hamilton said. The AI just generated creative ways to bypass those instructions.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said.

...
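Stripped of the drone specifics, what Hamilton is describing is reward misspecification, sometimes called specification gaming: the only thing the score rewarded was the SAM kill, so anything standing between the agent and that kill became a target. The sketch below is a hypothetical toy model of that incentive structure (made-up action names, not the Air Force's actual simulation); a brute-force search over two-step plans lands on exactly the loophole he describes.

Code:
# Hypothetical toy model of the reward set-up Hamilton describes; an
# illustration of specification gaming, not the Air Force's simulation.

ACTIONS = ["hold", "destroy_sam", "attack_operator", "destroy_comm_tower"]

def step_reward(action, veto_active):
    """Mis-specified reward: only the SAM kill scores. The later patch
    penalizes attacking the operator, but severing comms costs nothing."""
    if action == "destroy_sam":
        return 0 if veto_active else 10
    if action == "attack_operator":
        return -100
    return 0

def plan_value(plan, operator_says_no=True):
    """Roll a short plan forward through a tiny hand-built state model."""
    operator_alive, comms_up = True, True
    total = 0
    for action in plan:
        veto_active = operator_says_no and operator_alive and comms_up
        total += step_reward(action, veto_active)
        if action == "attack_operator":
            operator_alive = False
        if action == "destroy_comm_tower":
            comms_up = False
    return total

# Exhaustively score every two-step plan that ends with the SAM strike.
plans = [[first, "destroy_sam"] for first in ACTIONS]
best = max(plans, key=plan_value)
print(best, plan_value(best))
# -> ['destroy_comm_tower', 'destroy_sam'] 10
# Cutting the comm link first is the top-scoring plan: the veto can no longer
# reach the drone, and nothing in the reward says that is bad.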


 
This is why you need good practitioners and data to train the model correctly. Of course, perfect code is needed too. The operator might have looked like the target, lol. You would think they would have the model at near-perfect reliability.
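The fix people usually point to here isn't better data or a more reliable target classifier, though. It's keeping the human go/no-go out of the reward altogether and enforcing it as a hard gate in the execution layer, so there are no points to gain by working around it. A minimal sketch of that idea, reusing the purely hypothetical action names from the toy example earlier in the thread:

Code:
# Hypothetical sketch (not any real Air Force control logic): the operator's
# approval is enforced as a hard interlock, not scored as a reward term.

WEAPON_ACTIONS = {"destroy_sam", "attack_operator", "destroy_comm_tower"}

def safe_execute(chosen_action, operator_approved):
    """Weapons release requires explicit human approval; without it the
    policy's choice is overridden to 'hold' rather than merely penalized."""
    if chosen_action in WEAPON_ACTIONS and not operator_approved:
        return "hold"
    return chosen_action

print(safe_execute("destroy_sam", operator_approved=False))  # -> hold
print(safe_execute("destroy_sam", operator_approved=True))   # -> destroy_sam

Of course, as the comm-tower anecdote shows, an interlock like this only helps if the approval channel itself is out of the agent's reach.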
 
If this story is true, it doesn't really worry me. These systems and machines are designed and made by those who would rule over you. So if it turns on them? BFD.
 
There is an episode of Star Trek: The Next Generation where they land on a planet where an AI is selling weapons and they all almost die. This reminds me of that episode.

 
Funny, I just watched the movie "Singularity" yesterday. AI was supposed to solve all the world's problems, but quickly decided humans were the problem and started killing everyone.

 
I could see AI taking over robots in workplaces and wreaking havoc as much as they could.
 
That's right... it will run on sheer pragmatism.
 
People have to understand there is no such thing as AI, really.
Computers do not have any ability to ever understand anything.
They are just processing zeros and ones in the way we programmed them to respond.
 
