
Worry not! Defense Department warbots must adhere to ‘ethical standards’

By Adam Pockross

We may not be at the level of RoboCop’s ED-209 Enforcement Droid just yet, but the U.S. has been employing lethal robots in military service for some time. Now, with artificial intelligence growing more capable, the Army is seeking outside help to achieve quicker response times in its Advanced Targeting and Lethality Automated System (ATLAS). Fear not, though: the Defense Department wants you to know that even though these autonomous killing machines will be more lethal, they’ll still have to adhere to the same old ethical standards that guide today’s warbots.

Let’s backtrack to last month, when the U.S. Army announced an Industry Day for March 12 and asked private companies for white papers detailing “sources capable of providing technical solutions” on how to better support ATLAS, the Army’s AI-powered, semi-autonomous targeting system for armed ground robots.

In part, here’s what the Army is hoping to gain:

“The Army has a desire to leverage recent advances in computer vision and Artificial Intelligence / Machine Learning (AI/ML) to develop autonomous target acquisition technology, that will be integrated with fire control technology, aimed at providing ground combat vehicles with the capability to acquire, identify, and engage targets at least 3X faster than the current manual process.”

Apparently, some of the Army’s language frightened off would-be partners, presumably the ones “that do not traditionally do work with the U.S. Army.”

So, according to Defense One, the Army responded by adding a disclaimer to the invitation, which simply restates the “ethical standards” by which the Defense Department has always expected its warbots to live (sort of).

The added language states, “Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.”

See, nothing to worry about! *gulp*

(via Gizmodo)