We've seen enough Terminator movies to know that giving computers too much control results in self-awareness, which means doom for humanity. We're not afraid of that scenario, because it's just fiction. Or is it?
A few researchers have apparently not tuned in to the movies or The Sarah Connor Chronicles, because they taught their computer to read and understand a game manual, then apply that knowledge while playing a computer game.
A paper by S.R.K. Branavan, Regina Barzilay, and David Silver reads, "Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 27% absolute improvement and winning over 78% of games when playing against the built-in AI of Civilization II."
The computer started as a blank slate. The researchers, unaware they were giving Skynet the finger, "employed a multi-layer neural network," applied a "Monte-Carlo search algorithm," and then gave it "linguistic knowledge."
According to The Escapist,
"...[the computer] gained knowledge as it progressed by comparing words on the screen with words in the manual and searching surrounding text for associated words, slowly figuring out what they meant and which actions led to positive results.
"[A]ccording to Brown University Professor of Computer Science Eugene Charniak, 'The important point is that this was able to extract useful information from the manual...'"
What did they learn? They learned that the phrase "RTFM" applies to computers as well as people.
And what did we learn? We learned that the upcoming machine apocalypse might not begin in a government facility; it may actually start on a computer designed for game research.