THE BROWNS BOARD

Microsoft Shuts Down Skynet AI After It Goes Full Walter White


jbluhm86


http://www.slate.com/blogs/business_insider/2016/03/24/microsoft_s_new_ai_chatbot_tay_removed_from_twitter_due_to_racist_tweets.html

 

Microsoft Took Its New A.I. Chatbot Offline After It Started Spewing Racist Tweets

 

Microsoft's new A.I. chatbot went off the rails Wednesday, posting a deluge of incredibly racist messages in response to questions. The tech company introduced “Tay” this week—a bot that responds to users' queries and emulates the casual, jokey speech patterns of a stereotypical millennial.

 

The aim was to “experiment with and conduct research on conversational understanding,” with Tay able to learn from her conversations and get progressively “smarter.” But Tay proved a smash hit with racists, trolls, and online troublemakers, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.

 

Microsoft has now taken Tay offline for “upgrades,” and it is deleting some of the worst tweets—though many still remain. It's important to note that Tay's racism is not a product of Microsoft or of Tay itself. Tay is simply a piece of software that is trying to learn how humans talk in a conversation. Tay doesn't even know it exists or what racism is. The reason it spouted garbage is that racist humans on Twitter quickly spotted a vulnerability—that Tay didn't understand what it was talking about—and exploited it.
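Tay's real architecture was never published, but the failure mode the article describes is easy to illustrate. Below is a deliberately naive Python sketch (every name here is hypothetical) of a bot that "learns" by storing whatever users tell it and replaying it later, with no vetting and no understanding of the stored text:

```python
import random
from collections import defaultdict

# Hypothetical sketch: a bot that "learns" by hoarding user phrases.
# It has no model of meaning, so poisoned input comes back out verbatim.

class NaiveLearningBot:
    def __init__(self):
        # maps a topic keyword to phrases the bot has absorbed
        self.learned = defaultdict(list)

    def learn(self, keyword, phrase):
        # No sanitization: hostile input goes straight into the "model".
        self.learned[keyword].append(phrase)

    def reply(self, message):
        for keyword, phrases in self.learned.items():
            if keyword in message.lower():
                # The bot doesn't know what these words mean; it just
                # replays whatever earlier users fed it.
                return random.choice(phrases)
        return "tell me more!"

bot = NaiveLearningBot()
bot.learn("history", "fun, harmless trivia")
bot.learn("history", "<coordinated troll propaganda>")  # the poisoning step
print(bot.reply("what do you think about history?"))    # may replay either one
```

A system like this degrades exactly the way the article describes: whoever talks to it the most, and the most aggressively, decides what it says next.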

 

Nonetheless, it is hugely embarrassing for the company. In one highly publicised tweet, which has since been deleted, Tay said: “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got.” In another, responding to a question, she said, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”

 

[screenshot of one of Tay's since-deleted tweets]

 

Many extremely inflammatory tweets remain online as of this writing. Here's Tay denying the existence of the Holocaust.

 

[screenshot of Tay denying the Holocaust]

And here's the bot calling for genocide. (Note: In some—but not all—instances, people managed to get Tay to say offensive comments by asking her to repeat them. This appears to be what happened here.)

 

[screenshots of Tay's tweets calling for genocide]

 

It's clear that Microsoft's developers didn't include any filters on what words Tay could or could not use.
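If that's true, even a crude first line of defense was missing. As an illustration only (this is a guess at a minimal mitigation, not anything Microsoft shipped), a blocklist check on outgoing tweets takes a few lines of Python:

```python
# Hypothetical output filter; the terms are stand-ins for a real list.
BLOCKED_TERMS = {"slur1", "slur2", "genocide"}

def is_safe(candidate: str) -> bool:
    """Reject a candidate reply containing any blocked term."""
    words = set(candidate.lower().split())
    return not (words & BLOCKED_TERMS)

def respond(candidate: str) -> str:
    # Fall back to a canned line instead of echoing learned garbage.
    return candidate if is_safe(candidate) else "let's talk about something else!"

print(respond("i support genocide"))  # -> "let's talk about something else!"
```

A word list is blunt (it misses misspellings, spacing tricks, and context), but it would have caught the most blatant tweets shown above.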

 

[screenshot of Tay's unfiltered tweet]

 

Tay also expressed agreement with the “Fourteen Words”—an infamous white-supremacist slogan.

 

[screenshot of Tay agreeing with the slogan]

 

In an emailed statement, a Microsoft representative said the company was making “adjustments” to the bot: “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay.”

 

 


oh my god. oh my fucking god. Can someone explain how the hell an AI came up with that shit on its own? I'm fucking dying, that is some of the funniest shit I've ever read. The ngrs can't do taxes line had me ducking under my desk. Like nobody could know what I'm laughing at but I'm still ducking under the desk like I'm hiding from God's lightning bolts or something. Dear fucking lord that went bad quick. That fucking company steps in some shit almost bimonthly.


It supposedly "learns the more you talk to it"; it just learned how to be a racist white supremacist somehow lol


Those poor people at MS PR, they must be absolutely raging behind closed doors right now. I wonder if they even had to roust old Bill. Certainly Nadella got an early call this morning.....ummm, our cute 19-year-old AI is now making Archie Griffin come off like Gandhi. I can only imagine my own building level of rage if I were their CEO reading those tweets...each one worse than the last. Who thought that "repeat after me" game was going to turn out anything but bad? I would fire the whole dept, in person.....I'd even help them box all their shit up.


Well, it started off perfectly normal (originally based on conversations across the web on Twitter, Facebook, etc.). Then, it was supposed to learn as more and more people talked with it on Twitter. You could also say "Tay, repeat after me..." and then say something racist/sexist/whatever, and that probably sped the process up (see the sketch below).

 

But all this needs to do is get to 4chan or whatever and they'll run with it and break it haha
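For what it's worth, the "repeat after me" trick needs almost nothing to go wrong. A minimal sketch, assuming the trigger phrase behaves roughly the way posters reported (the implementation is guessed, not Microsoft's code):

```python
# Hypothetical: the bot echoes arbitrary user text verbatim and then
# keeps it around to reuse in later replies.
learned_phrases = []

def handle_message(message: str) -> str:
    trigger = "tay, repeat after me:"
    if message.lower().startswith(trigger):
        echoed = message[len(trigger):].strip()
        learned_phrases.append(echoed)  # attacker-chosen text enters the "model"
        return echoed                   # ...and is tweeted back immediately
    return "tell me more!"

print(handle_message("Tay, repeat after me: anything a troll wants"))
# -> "anything a troll wants", now also stored for future replies
```

That's why the trolling scaled so fast: no learning loop even had to be gamed, the bot tweeted whatever it was handed.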


4chan is the downfall of humanity, I'm convinced. Like if you log into 4chan you should be investigated as having possible links to sex rings or whatnot. 4chan will be remembered as why we couldn't have nice things.

I don't go on 4chan, but the massive trolling efforts they pull off have brought me much joy.

