Researchers at MIT's Computer Science and Artificial Intelligence Lab enabled an artificial intelligence system to read the instruction manual for the simulation game 'Civilization' so that it could play the game better. Aaaaaaaand now it does (with a jump in win-rate from 46% to 79%). Dammit guys, could you have opted for a game that DOESN'T have such dire ramifications for the human race? I dunno, 'Frogger' or something?
The extraordinary thing about Barzilay and Branavan's system is that it begins with virtually no prior knowledge about the task it's intended to perform or the language in which the instructions are written. It has a list of actions it can take, like right-clicking, left-clicking, or moving the cursor; it has access to the information displayed on-screen; and it has some way of gauging its success, like whether the software has been installed or whether it wins the game. But it doesn't know which actions correspond to which words in the instruction set, and it doesn't know what the objects in the game world represent.
So initially, its behavior is almost totally random. But as it takes various actions, different words appear on screen, and it can look for instances of those words in the instruction set. It can also search the surrounding text for associated words, and develop hypotheses about what actions those words correspond to. Hypotheses that consistently lead to good results are given greater credence, while those that consistently lead to bad results are discarded.
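That learning loop — act randomly, notice which manual words co-occur with what happens, and boost or discard word-to-action hypotheses based on reward — can be sketched in miniature. This is a toy illustration only, not the actual MIT system: the action list, the fake "manual," and the reward logic below are all invented for the example.

```python
import random
from collections import defaultdict

# Hypothetical action set and a toy "manual" mapping words to the action
# they actually describe (the learner never sees MANUAL directly; it only
# sees a reward signal).
ACTIONS = ["left_click", "right_click", "move_cursor"]
MANUAL = {"build": "left_click", "attack": "right_click", "explore": "move_cursor"}

def learn_word_actions(episodes=2000, seed=0):
    rng = random.Random(seed)
    # credence[word][action]: accumulated belief that `word` means `action`
    credence = defaultdict(lambda: defaultdict(float))
    for _ in range(episodes):
        word = rng.choice(list(MANUAL))  # word currently visible on screen
        # Behave mostly at random at first; sometimes exploit current beliefs.
        if rng.random() < 0.5:
            action = max(ACTIONS, key=lambda a: credence[word][a])
        else:
            action = rng.choice(ACTIONS)
        # Hypotheses that lead to good results gain credence; bad ones lose it.
        reward = 1.0 if action == MANUAL[word] else -0.2
        credence[word][action] += reward
    # Best-guess interpretation of each manual word after training.
    return {w: max(ACTIONS, key=lambda a: credence[w][a]) for w in MANUAL}
```

With enough trials, the correct hypotheses dominate and `learn_word_actions()` recovers the hidden word-to-action mapping — the same reinforce-or-discard dynamic the quoted passage describes, just without Civilization attached.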
Terrifying. Humanity, meet Skynet's grandfather. Skynet's grandfather, humanity. Now quick -- somebody throw a bag over his head and we'll beat his robotic ass! "Ass, or meat?" What in the -- YOU ARE BANNED FROM GEEKOLOGIE.
Thanks to Tyler, Matt and Jordan, who all called in sick to work today to start designing their underground bunkers.