Ban on deployment of ‘killer robots’ urged by rights group
The report Losing Humanity — co-produced by Harvard Law School’s International Human Rights Clinic — also raises the alarm over the ethics of the looming technology.
Calling such weapons “killer robots,” it urges “an international treaty that would absolutely prohibit the development, production, and use of fully autonomous weapons”.
The US already leads the way in military robotics, notably the unmanned aircraft used for surveillance or attacks over Afghanistan, Yemen, and elsewhere. But these are controlled by human operators from ground bases and cannot kill without authorisation.
Fully autonomous robots that decide for themselves when to fire could be developed within 20 to 30 years, or “even sooner,” the report said, adding that weapon systems requiring little human intervention already exist.
Perhaps closest to the Terminator-style killing machine portrayed in Arnold Schwarzenegger’s action films is a Samsung sentry robot already in use in South Korea, which can spot unusual activity, challenge intruders and, when authorised by a human operator, shoot them.
Going fully autonomous would spare human troops from dangerous situations. The downside, though, is that robots would then be left to make highly nuanced decisions on their own, the most fraught being the need to distinguish between civilians and combatants in a war zone.



