Autonomous drone may have 'hunted down' human targets without orders, UN report suggests
The revelation raises concern over Terminator-style AI weapons that could kill people in conflict without any human control. The drone was deployed in March last year during fighting between Libyan government forces and a breakaway military faction led by Khalifa Haftar, commander of the Libyan National Army.
The report on the incident from the UN Security Council’s Panel of Experts on Libya was obtained by the New Scientist magazine.
The drone was a Kargu-2 quadcopter created by Turkish military tech company STM.
The weapon carries an explosive charge, can be aimed at a target and detonates on impact.
The report, published earlier this year, described how Haftar’s forces were “hunted down and remotely engaged” by the drones, which were operating in a “highly effective” autonomous mode that required no human controller.
The report added: “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”
The report appears to suggest that the drones were targeting humans on their own initiative.
No deaths were confirmed in the report; however, similar weapons have caused “significant casualties” in other situations.
Homeland security specialist Zachary Kallenborn raised concerns over the accuracy of the weapons technology.
Writing in The Bulletin of the Atomic Scientists, he said: “Current machine learning-based systems cannot effectively distinguish a farmer from a soldier.
“Farmers might hold a rifle to defend their land, while soldiers might use a rake to knock over a gun turret. … Even adequate classification of a vehicle is difficult.”
Mr Kallenborn explained that, without a human to make a judgement call, the risks are too high.
He added: “Any given autonomous weapon has some chance of messing up, but those mistakes could have a wide range of consequences.
“The highest risk autonomous weapons are those that have a high probability of error and kill a lot of people when they do.
“Misfiring a .357 magnum is one thing; accidentally detonating a W88 nuclear warhead is something else.”