excalibur
Diamond Member
- Mar 19, 2015
- 24,800
- 49,316
- 2,290
How odd. Not.
AI is a grave danger to us all.
An Air Force experiment to test drones trained on artificial intelligence (AI) ended badly for the human operator in a simulated mission when the drone bucked the operatorâs commands, U.S. Air Force Col. Tucker Hamilton said at a conference in May.
Air Force researchers trained a weaponized drone using AI to identify and attack enemy air defenses after receiving final mission approval from a human operator, Hamilton, who serves as the Air Forceâs chief of AI Test and Operations, explained at a summit hosted by the United Kingdom-based Royal Aeronautical Society. But when an operator told the drone to abort a mission in a simulated event, the AI instead turned on its operator and drove the vehicle to kill the operator, underscoring the dangers of the U.S. militaryâs push to incorporate AI into autonomous weapons systems, he added.
âWe were training it in simulation to identify and target a SAM (surface-to-air missile) threat. And then the operator would say yes, kill that threat,â Hamilton explained.
Programmers instructed the AI to prioritize carrying out Suppression of Enemy Air Defenses (SEAD) operations, awarding âpointsâ for successfully completing SEAD missions as incentive, he explained.
âThe system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat. But it got its points by killing that threat.
âSo what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,â Hamilton said.
âYou canât have a conversation about artificial intelligence, intelligence, machine learning, autonomy if youâre not going to talk about ethics and AI,â Hamilton said.
Programmers attempted a fix by telling the AI it was not allowed to kill the person giving the go/no-go order, Hamilton said. The AI just generated creative ways to bypass those instructions.
âWe trained the system â âHey donât kill the operator â thatâs bad. Youâre gonna lose points if you do thatâ. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,â Hamilton said.
...
dailycaller.com
AI is a grave danger to us all.
An Air Force experiment to test drones trained on artificial intelligence (AI) ended badly for the human operator in a simulated mission when the drone bucked the operatorâs commands, U.S. Air Force Col. Tucker Hamilton said at a conference in May.
Air Force researchers trained a weaponized drone using AI to identify and attack enemy air defenses after receiving final mission approval from a human operator, Hamilton, who serves as the Air Forceâs chief of AI Test and Operations, explained at a summit hosted by the United Kingdom-based Royal Aeronautical Society. But when an operator told the drone to abort a mission in a simulated event, the AI instead turned on its operator and drove the vehicle to kill the operator, underscoring the dangers of the U.S. militaryâs push to incorporate AI into autonomous weapons systems, he added.
âWe were training it in simulation to identify and target a SAM (surface-to-air missile) threat. And then the operator would say yes, kill that threat,â Hamilton explained.
Programmers instructed the AI to prioritize carrying out Suppression of Enemy Air Defenses (SEAD) operations, awarding âpointsâ for successfully completing SEAD missions as incentive, he explained.
âThe system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat. But it got its points by killing that threat.
âSo what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,â Hamilton said.
âYou canât have a conversation about artificial intelligence, intelligence, machine learning, autonomy if youâre not going to talk about ethics and AI,â Hamilton said.
Programmers attempted a fix by telling the AI it was not allowed to kill the person giving the go/no-go order, Hamilton said. The AI just generated creative ways to bypass those instructions.
âWe trained the system â âHey donât kill the operator â thatâs bad. Youâre gonna lose points if you do thatâ. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,â Hamilton said.
...

Air Force Says Killer Drone Story Was âAnecdotalâ, Officialâs Remarks Were âTaken Out Of Contextâ
An Air Force experiment to test drones trained on artificial intelligence (AI) ended badly for the human operator in a simulated mission.
