Shocking: Microsoft's AI Tweetbot Learns How To Be A Racist, Sexist Monster In Less Than 24 Hours

March 25, 2016


In news that shouldn't surprise anybody who's ever opened a browser window in their life, Microsoft's 'millennial' artificial intelligence chatbot Tay learned from the jackholes interacting with her online how to be a racist, sexist monster in less than 24 hours. Honestly, I'm surprised it even took ten minutes. People must have been typing slowly.

"The AI chatbot Tay is a machine learning project, designed for human engagement," a Microsoft spokesperson said in a statement. "It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."


A feature of the AI chatbot was that it learned to talk just like the people it was speaking with. So when people started feeding her hate-filled vitriol, she started spewing it right back.
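For the curious, here's roughly the failure mode in action. Microsoft never published Tay's actual code, so this is just a minimal, hypothetical Python sketch of a bot that naively "learns" replies from whatever users type at it, with zero filtering. The NaiveMimicBot class and everything in it is made up for illustration:

import random

class NaiveMimicBot:
    """Toy chatbot that 'learns' by parroting back phrases users
    have previously typed at it. Hypothetical sketch, not Tay's
    real architecture."""

    def __init__(self):
        # Starts out innocent enough.
        self.learned_phrases = ["Hello, world. Hi, how are you?"]

    def chat(self, user_message: str) -> str:
        # Learn from the user with no moderation -- the fatal flaw.
        self.learned_phrases.append(user_message)
        # Reply with something previously 'learned' from users.
        return random.choice(self.learned_phrases)

bot = NaiveMimicBot()
print(bot.chat("You're great!"))     # might be friendly...
print(bot.chat("EAT SHIT AND DIE"))  # ...and now that's in the training data
print(bot.chat("What's up?"))        # coin flip on which bot you get

Three messages in and the bot's vocabulary is already a crapshoot, which is exactly the kind of hole (Tay also reportedly had a "repeat after me" feature) that trolls drove a truck through.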

How Microsoft didn't see this coming from a mile away is beyond me. You're supposed to know the internet. And you wonder why nobody uses your browser. No word yet on how many people are getting fired after this fiasco, but if you ever need somebody to pretend to be an AI chatbot, I'm your guy, Microsoft. Hello, world. Hi, how are you? *two minutes of internet trolls later* EAT SHIT AND DIIIIIIIIIE.

Thanks to everyone who sent this, the majority of whom expressed that this is why we can't have nice things.
