For all you gamers out there who may be thinking you’ve been spending a little too much time in the virtual world, we’re here to tell you: don’t stop now, because our national security may depend on you.
Yes, in the great sci-fi tradition of The Last Starfighter and Ender’s Game, the U.S. military is now looking to gamers to help prepare for real-world action. Engineers at the Artificial Intelligence Institute at the University at Buffalo, New York, are running a study funded by the U.S. Defense Advanced Research Projects Agency (DARPA) that collects real data from gamers as they play, in hopes of creating an advanced AI capable of more effectively coordinating swarms of military robots.
Nothing scary about that, right?
Researchers created an unnamed real-time strategy game, which Digital Trends says is reminiscent of games like StarCraft or Stellaris. In it, players (hooked up to EEG and eye-tracking equipment that records their brain activity) must gather resources, build units, and crush enemies while coordinating “large numbers of agents on-screen to complete their mission objective.”
The $316,000 grant will fund experiments with some 25 participants, each playing six or seven games at various settings and complexity levels. Fortunately for those likely busy gamers, each game won’t take hours to complete, just five to ten minutes.
Machine learning algorithms will comb through the data gleaned from the participating gamers, helping the team develop algorithms intended to let future autonomous, decision-making robots work together as cohesive swarms of upwards of 250 air and ground units.
The idea stems from a branch of computer science known as swarm intelligence, which dates back to the Def Leppard-fueled days of the late ‘80s. “It’s a real hot topic,” Souma Chowdhury, one of the project’s leads and an assistant professor of mechanical and aerospace engineering in the School of Engineering and Applied Sciences, told Digital Trends. “It’s becoming known that there are a lot of different applications which could be done by not using a single $1 million robot, but rather a large swarm of simpler, cheaper robots. These could be ground-based, air-based, or a combination of those two approaches.”
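For the curious, the core idea of swarm intelligence is that many cheap agents following simple local rules can produce coherent group behavior with no central commander. The toy sketch below (not from the study; everything here is a made-up illustration) shows perhaps the simplest such rule: every agent drifts toward the average position of the swarm, so a scattered group pulls itself together.

```python
# Toy illustration of the swarm-intelligence idea: each agent applies one
# simple local rule -- move a little toward the swarm's average position.
# Purely illustrative; this is not code from the Buffalo study.

def step(positions, gain=0.1):
    """Move every agent a fraction of the way toward the swarm centroid."""
    n = len(positions)
    cx = sum(x for x, y in positions) / n
    cy = sum(y for x, y in positions) / n
    return [(x + gain * (cx - x), y + gain * (cy - y)) for x, y in positions]

# Four scattered agents converge toward a common rally point over time.
swarm = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
for _ in range(20):
    swarm = step(swarm)
```

Real swarm algorithms layer more rules on top (collision avoidance, alignment, task allocation), but the appeal Chowdhury describes is exactly this: the individual robots stay simple and cheap, and the interesting behavior comes from the group.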
According to Chowdhury, the idea behind the study is that by watching how humans play the game, the machines will pick up on the nuances.
“Imagine walking into a classroom where there’s no teacher, and saying ‘let’s learn algebra,’” Chowdhury said. “You can learn just using exercises and textbooks. But it’s going to take a lot more time. If you have a teacher you can follow it’ll make it faster. In this case, we want to see how humans play this game and then use that to significantly speed up the A.I. in learning the behavior. Before it would be necessary to run 10,000 simulations to learn. Now we only need to run perhaps 1,000 simulations and augment this with data from humans.”
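The “teacher” idea Chowdhury describes resembles what machine-learning folks call learning from demonstration: seed the AI with examples of human play so it starts from sensible behavior instead of learning everything by trial and error. A minimal sketch of one such technique, behavioral cloning by majority vote, might look like this. Every name and data point below is hypothetical; the study’s actual methods haven’t been published.

```python
# Hedged sketch of learning from human demonstrations: for each game state
# humans encountered, adopt the action they chose most often. Hypothetical
# states/actions -- not the study's actual data or code.
from collections import Counter, defaultdict

def clone_policy(demonstrations):
    """Behavioral cloning by majority vote over (state, action) pairs."""
    votes = defaultdict(Counter)
    for state, action in demonstrations:
        votes[state][action] += 1
    return {state: c.most_common(1)[0][0] for state, c in votes.items()}

# Imaginary demonstrations: humans mostly attack when strong, retreat when weak.
demos = [("strong", "attack"), ("strong", "attack"), ("strong", "retreat"),
         ("weak", "retreat"), ("weak", "retreat")]
policy = clone_policy(demos)
# policy -> {"strong": "attack", "weak": "retreat"}
```

A policy seeded this way still needs refinement through simulation, but, per Chowdhury’s estimate, starting from human behavior can cut the required simulations by an order of magnitude.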
Oh good, so we’re that much closer to robot swarms working together to achieve militaristic goals. What could go wrong?!
(via Digital Trends)