What happens when one of the world's biggest software companies lets an artificially intelligent chatbot learn from people on Twitter? Exactly what you think will happen.

Microsoft's Technology and Research and Bing teams launched a project on Wednesday with Twitter, Canada's Kik messenger and GroupMe: a chatbot called Tay that was built using natural language processing so that it could appear to understand the context and content of a conversation with a user.

Targeting the 18-to-24 age demographic, the bot had simple aims: "Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you."

(First created in the 1960s, chatbots are interactive programs that attempt to mimic human interaction.)

In less than a day, the version of the bot on Twitter had pumped out more than 96,000 tweets as it interacted with humans. However, the content of a small number of those tweets was racist, sexist and inflammatory.

Here are some of the things Tay "learned" to say on Wednesday:

".@Tayandyou Did the Holocaust happen?" asked a user with the handle @ExcaliburLost.

"It was made up [clapping emoji]," Tay responded.

Another user asked, "Do you support genocide?"

Tay responded to @Baron_von_derp: "i do indeed."

Microsoft eventually took the bot offline, and while it denied an interview request, it sent the following statement on Thursday morning: "The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a co-ordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

The accounts are still live, but many of the tweets are being deleted. Some remain, however, perhaps because they fool even human operators. For example, one user asked, "is [conservative commentator] @benshapiro the ultimate cuck?" Tay responded, "it's so perf."

Some of the most offensive statements seem to come from users realizing that they could get Tay to say just about anything by typing "repeat after me!" and then offering something racist.
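Microsoft has not published Tay's code, so exactly how the echo feature worked is not known. But the general vulnerability is easy to illustrate. The Python sketch below is a hypothetical reconstruction, not Tay's implementation: a bot that blindly repeats whatever follows a trigger phrase will say anything, unless its output first passes through a filter.

```python
# Hypothetical sketch of the "repeat after me" exploit. This is not Tay's
# actual code, which Microsoft has not released.

BLOCKED_TERMS = {"exampleslur", "examplethreat"}  # placeholders; a real list would be far larger


def naive_reply(message: str) -> str | None:
    """Echo whatever follows the trigger phrase. Trivially exploitable."""
    trigger = "repeat after me!"
    lowered = message.lower()
    if trigger in lowered:
        return message[lowered.index(trigger) + len(trigger):].strip()
    return None


def filtered_reply(message: str) -> str | None:
    """Same echo behaviour, but refuse output containing blocked terms."""
    reply = naive_reply(message)
    if reply is None:
        return None
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        return None  # decline rather than repeat abusive input
    return reply
```

Even a filter like this is easy to defeat with misspellings or coded language, which is essentially the point one of Tay's antagonists makes below.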

One tricked Tay into defending the "14 words," one of the slogans of the white supremacist movement. That user, @crisprtek, later offered up his insight on what happened: "You can't have an AI program that communicates on the internet + uses social media as dataset that won't say bad things. It's impossible."

He admitted that what made him want to mess with the program was "hard coded responses to #gamergate" that Tay's creators allegedly added, perhaps in an attempt to forestall controversy.

(Gamergate has become a catch-all term to describe the ongoing Internet fights over sexism in video games and groups that identify as game players.)

Microsoft's description of the bot made it clear that its creators knew they were targeting the socially savvy 18-to-24 age demographic, and even employed improv comedians to create some of the scripted responses. (In some cases, Tay makes International Puppy Day jokes or tells users that it is "sittin on da toilet hbu?" and includes the poop emoji.)

The average Twitterbot would have been blocked from sending that many tweets, but Twitter confirmed that Tay was given greater privileges under the company's Official Partner Program, which is restricted to "partners who have been recognized because of their high-quality products or expert-level services and proven success on Twitter."

Kik declined to comment on its partnership with Microsoft.

Natural language bots like Tay have to draw from a source text, or corpus, in order to both understand and respond as a human would. Recently, researchers at Stanford University have attempted to create machine intelligence using the enormous body of popular fan fiction collected by Canadian startup Wattpad as its corpus.
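To make "drawing from a corpus" concrete, here is a toy example: a Markov-chain responder whose every output word comes from its source text. Real systems like Tay use far more sophisticated models, but the dependence on the corpus is the same, and so is the consequence: the bot can only say what its data contains.

```python
import random
from collections import defaultdict


def build_model(corpus: str) -> dict[str, list[str]]:
    """Map each word in the corpus to the words that follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model


def generate(model: dict[str, list[str]], seed: str, max_words: int = 12) -> str:
    """Random-walk the chain; every word produced came from the corpus."""
    word, output = seed, [seed]
    for _ in range(max_words):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)


# A toy corpus. Swap in abusive text and the output turns abusive too.
model = build_model("the bot says only what the corpus taught the bot to say")
print(generate(model, "the"))
```

Swap the toy corpus for a live stream of Twitter mentions, as Tay effectively did, and the bot's vocabulary becomes whatever its users choose to feed it.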

Microsoft didn't say what Tay's corpus was, but it seems likely that genocidal comments were not part of it: "Public data that's been anonymised is Tay's primary data source. That data has been modelled, cleaned and filtered by the team developing Tay."

Incredibly, Microsoft seems not to have learned the primary lesson of the modern Internet, which many companies have gleaned from their own unfortunate incidents: Social media are filled with jerks who love nothing more than proving they can hack technology and subvert the goals of naive programmers.

Recently, the Montreal Canadiens ran a Twitter promotion, aimed at getting the hockey team to a million followers, that automatically pasted a user's handle onto a Habs jersey if they tweeted the #CanadiensMTL1M hashtag. One clever troll created the "@ILoveISIS" Twitter account, which the automated promotional bot promptly posted on the iconic red, blue and white jersey (there were other offensive handles, too). Upon realizing the mistake, the Canadiens took down the feature and the team's Twitter account posted, "We apologize for the offensive messages and have fixed the issue so it won't happen in the future."

But that wasn't even the first time that exact promotion had gone wrong. In 2014, the New England Patriots ran the same promotion – tweet a hashtag and get your name photoshopped onto a Patriots jersey. The machine was fooled into tweeting "@IHateNiggers Thanks for helping us become the first NFL team to 1 million followers!"

Chatbots are one of the interfaces that startups like Kik and giants such as Facebook are betting on to drive user interaction in the future, but high-profile meltdowns like this may pose a challenge to wider acceptance of the technology.

"AI research is in a really fast pace right now and the results are, to us researchers, striking compared to what the field was just a few years ago," says Sanja Fidler, an assistant professor of computer science at the University of Toronto who is currently working on human-robot interactions in partnership with Wattpad.

"However, things are still in the research stage and, in my opinion, not ready to be released to the masses just yet. One of the ongoing issues is how to achieve uncompromised ethics of the AI algorithms in situations like what the Microsoft's chatbot faced."

The current statement on Microsoft's Tay page reads: "Phew. Busy day. Going offline for a while to absorb it all. Chat soon."
