Interestingly enough, The Conversation believes that once these AI drones are commercialized, there will be “vast legal and ethical implications for wider society.” Moreover, the sphere of warfare could soon expand to include technology companies, engineers, and scientists, who could be labeled valid military targets because of their involvement in writing code for the machines.

The Conversation makes a stunning revelation about the legal exposure of Silicon Valley technology firms that provide lines of code to autonomous drone weapon systems.

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings. But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident. Under current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked in the right circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as civilian cars.

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use. Companies like Google, its employees, or its systems could become liable to attack from an enemy state. For example, if Google’s Project Maven image-recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.

The Conversation reminds us that recent events involving autonomous AI in society “should serve as a warning.”

“Uber and Tesla’s fatal experiments with self-driving cars suggest it is pretty much guaranteed that there will be unintended autonomous drone deaths as computer bugs are ironed out.”

If militarized AI machines are left to decide who dies… we ask one simple question: how many non-combatant deaths will the Army count as acceptable while the AI drone technology is refined?

This article was originally published by ZeroHedge.com.
