Does this guy have a point, or has he seen too many movies?
At a meeting of the United Nations Human Rights Council in Geneva, Switzerland, this week, Christof Heyns, the U.N. special rapporteur on extrajudicial, summary or arbitrary executions, will give a report calling for a moratorium on "lethal autonomous robotics," citing the danger of having machines around that can target and kill on their own once they're switched on.
"Machines lack morality and mortality, and as a result should not have life and death powers over humans," Heyns will say.
Fully autonomous robotic weapons systems have not yet been developed (at least, not that we know of), but research is already underway, and Heyns thinks that's a bad idea. He points to the modern military use of drones -- which were originally intended only as surveillance technology -- as proof of the danger of further removing humans from the consequences of lethal decisions, and warns that taking the human element out altogether would deepen that detachment.
"In addition to being physically removed from the kinetic action, humans would also become more detached from decisions to kill – and their execution," Heyns said.
Heyns doesn't go so far as to raise the specter of an actual Skynet-style robot uprising, but he is determined to head off a greater reliance on robotics in taking human life, particularly the possibility that, sometime in the future, robots could be allowed to make kill decisions entirely on their own. The issue, at the moment, is mostly a moral one, but as sci-fi geeks we know full well where giving robots the ability to make their own murder decisions can lead.
What do you think? Do you agree with Heyns that autonomous lethal robots should be banned from ever existing?
(Via The Guardian)