In Skynet news, researchers at Facebook have created artificial intelligence agents that developed their own language to communicate with each other -- a language we can't understand. The researchers have since pulled the plug on the experiment over fears they could lose control of the system. *takes hammer to computer* What? It just called me fat.
If AI-invented languages become widespread, they could complicate the development and adoption of neural networks. There's not yet enough evidence to determine whether they pose a threat that could let machines overrule their operators.
They do make AI development more difficult though, since humans can't follow the relentlessly logical structure of these languages. While they appear nonsensical, results observed by teams like the one behind Google Translate suggest such languages can represent the most efficient solution to the problem at hand.
Here's an idea: start building your anti-robot bunkers now, because there's no stopping them. Sure, researchers were able to pull the plug this time, but what about next time? We might not be so lucky. Especially if somebody way less responsible is in charge of making sure the AI doesn't go rogue. My only job could be pushing a big red button if something goes wrong, and you'd still find me in the break room making a sandwich out of coworkers' leftovers when the killer robots break out of their holding cells.
Thanks to Jenness S, who agrees we need to turn back before it's too late.