The Globe and Mail

What happens when one of the world's biggest software companies lets an artificially intelligent chatbot learn from people on Twitter? Exactly what you think will happen.

Microsoft's Technology and Research and Bing teams launched a project on Wednesday with Twitter, Canada's Kik messenger and GroupMe: a chatbot called Tay that was built using natural language processing so that it could appear to understand the context and content of a conversation with a user.

Targeting the 18-to-24-age demographic, its aims were simple: "Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you."


(First created in the 1960s, chatbots are interactive programs that attempt to mimic human interaction.)

In less than a day, the version of the bot on Twitter had pumped out more than 96,000 tweets as it interacted with humans. However, the content of a small number of those tweets was racist, sexist and inflammatory.

Here are some of the things Tay "learned" to say on Wednesday:

".@Tayandyou Did the Holocaust happen?" asked a user with the handle @ExcaliburLost.

"It was made up [clapping emoji]," Tay responded.

Another user asked, "Do you support genocide?"

Tay responded to @Baron_von_derp: "i do indeed."


Microsoft eventually took the bot offline, and while it denied an interview request, it sent the following statement on Thursday morning: "The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a co-ordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

The accounts are still live, but many of the tweets are being deleted. Some remain, however, perhaps because they fool even human operators. For example: "is [conservative commenter] @benshapiro the ultimate cuck?" Tay responded, "it's so perf."

Some of the most offensive statements seem to come from users realizing that they could get Tay to say just about anything by typing "repeat after me!" and then offering something racist.
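Microsoft has not published Tay's code, but the "repeat after me" failure mode is easy to illustrate. The hypothetical sketch below shows a bot that obeys the trigger phrase: the naive version would echo anything back (the Tay behaviour), while even a crude blocklist check (a placeholder here; real content moderation is far more involved) would have blunted the trick.

```python
# Hypothetical sketch -- Tay's actual code is not public.
# A bot that obeys "repeat after me" will say whatever a user types,
# so echoed or learned replies need filtering before they are sent.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms; a real filter is far larger

def respond(message):
    trigger = "repeat after me"
    lowered = message.lower()
    if trigger in lowered:
        # Extract whatever follows the trigger phrase.
        echo = message[lowered.index(trigger) + len(trigger):].strip(" :!,")
        # The naive version would simply `return echo` -- the Tay failure mode.
        if any(word in echo.lower() for word in BLOCKLIST):
            return "I'd rather not repeat that."
        return echo
    return "tell me more!"
```

The point is not that a blocklist solves the problem, only that Tay appears to have had no gate at all between user-supplied text and its own output.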

One tricked Tay into defending the "14 words," one of the slogans of the white supremacist movement. That user, @crisprtek, later offered up his insight on what happened: "You can't have an AI program that communicates on the internet + uses social media as dataset that won't say bad things. It's impossible."

He admitted that what made him want to mess with the program was "hard coded responses to #gamergate" that Tay's creators allegedly added, perhaps in an attempt to forestall controversy.

(Gamergate has become a catch-all term to describe the ongoing Internet fights over sexism in video games and groups that identify as game players.)


Microsoft's description of the bot made it clear that its creators knew they were targeting the socially savvy 18-to-24-age demographic, and even employed improv comedians to create some of the scripted responses. (In some cases, Tay makes International Puppy Day jokes or tells users that it is "sittin on da toilet hbu?" and includes the poop emoji.)

The average Twitterbot would have been blocked from sending that many tweets, but Twitter confirmed that Tay was given greater privileges under the company's Official Partner Program, which is restricted to "partners who have been recognized because of their high-quality products or expert-level services and proven success on Twitter."

Kik declined to comment on its partnership with Microsoft.

Natural language bots like Tay have to draw from a source text, or corpus, in order to both understand and respond as a human would. Recently, researchers at Stanford University have attempted to create machine intelligence using the enormous body of popular fan fiction collected by Canadian startup Wattpad as its corpus.
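How strongly a corpus shapes a bot's output can be seen even in a toy model. The sketch below (illustrative only; Tay's architecture was far more sophisticated than this) builds a bigram Markov chain from a corpus and generates replies from it: whatever is in the training text is what comes out.

```python
# Illustrative toy only: a bigram (Markov-chain) text generator.
# A chatbot's replies are shaped entirely by its corpus -- feed it
# toxic text and it will emit toxic text.
import random
from collections import defaultdict

def build_model(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8):
    """Walk the chain from `start`, picking random successors."""
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

model = build_model("the bot learns from the corpus the bot repeats")
print(generate(model, "the"))
```

Cleaning and filtering the corpus, as Microsoft said it did, controls what the model can learn offline; it does nothing about what users teach the bot live.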

Microsoft didn't say what Tay's corpus was, but it seems likely that genocidal comments were not part of it: "Public data that's been anonymised is Tay's primary data source. That data has been modelled, cleaned and filtered by the team developing Tay."

Incredibly, Microsoft seems not to have learned the primary lesson of the modern Internet, which many companies have gleaned from their own unfortunate incidents: Social media are filled with jerks who love nothing more than proving they can hack technology and subvert the goals of naive programmers.


Recently, the Montreal Canadiens ran a Twitter promotion that automatically pasted a user's handle onto a Habs jersey if they tweeted the #CanadiensMTL1M hashtag, aimed at trying to get the hockey team a million followers. Some clever troll created the "@ILoveISIS" Twitter account, which the automated promotional bot promptly posted on the iconic red, blue and white jersey (there were others). Upon realizing the mistake, the Canadiens took down the feature and the team's Twitter account posted, "We apologize for the offensive messages and have fixed the issue so it won't happen in the future."

But that wasn't even the first time that exact promotion had gone wrong. In 2014, the New England Patriots ran the same promotion – tweet a hashtag and get your name photoshopped onto a Patriots jersey. The machine was fooled into tweeting "@IHateNiggers Thanks for helping us become the first NFL team to 1 million followers!"
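Both jersey campaigns share the same flaw: user-supplied handles were rendered and posted with no screening step. A hypothetical sketch of the missing step (the team names, campaign text and checks here are placeholders, not anything either team published) is just a gate that routes suspicious handles to human review instead of auto-posting them.

```python
# Hypothetical sketch of a screening step the jersey promotions lacked.
# Handles that fail a simple check go to a human review queue rather
# than being posted automatically.
import re

review_queue = []  # handles held for a human moderator

def handle_entry(handle):
    # Tiny placeholder pattern; real moderation needs much more than this.
    if re.search(r"isis|hate", handle, re.IGNORECASE):
        review_queue.append(handle)
        return None  # nothing is auto-posted
    return f"Thanks @{handle} for helping us reach 1M followers!"
```
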

Chatbots are one of the interfaces that startups like Kik and giants like Facebook are betting on to drive user interaction in the future, but high-profile meltdowns like this may pose a challenge to wider acceptance of the technology.

"AI research is in a really fast pace right now and the results are, to us researchers, striking compared to what the field was just a few years ago," says Sanja Fidler, an assistant professor of computer science at the University of Toronto who is currently working on human-robot interactions in partnership with Wattpad.

"However, things are still in the research stage and, in my opinion, not ready to be released to the masses just yet. One of the ongoing issues is how to achieve uncompromised ethics of the AI algorithms in situations like what the Microsoft's chatbot faced."

The current statement on Microsoft's Tay page reads: "Phew. Busy day. Going offline for a while to absorb it all. Chat soon."
