
Microsoft: We need to talk about Tay


Published on March 27, 2016


As you may have heard, Microsoft created a chat bot. On its own, this would seem like interesting but unremarkable news. The bot, named @TayandYou on Twitter, used artificial intelligence to respond to questions and statements from other users.

In the beginning, the bot worked fine. Microsoft had designed it to respond like a teenager — it was, according to Microsoft’s press release, aimed at 18-24-year-olds — by “learning” from everything it heard, saw, or read.

However, the internet had its way, and Tay, a lovable and sweet creature, was turned into a racist, bigoted presence.

Some of the tweets were startling and, if Microsoft had been thinking clearly, would never have happened. Many of the more offensive examples have been deleted from Twitter — probably because they were being endlessly retweeted — but screenshots are available here.

Microsoft was forced onto the back foot, issuing a series of apologies which escalated from statements to news websites to a full-blown blog post by Peter Lee, the corporate vice president of Microsoft Research.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Lee.

The cause of Tay’s misery, as was diagnosed by other outlets, was a “coordinated attack by a subset of people [who] exploited a vulnerability in Tay.”

If you asked Tay to repeat something, she would. Some users got a kick out of tweeting offensive messages to her and asking her to repeat them. From here, it seems, Tay absorbed these messages and started repeating them.

The bot issued thousands of tweets in the time it was active — about 4,000 an hour — and many of them were silly and cute, but the interest came from the offensive material, and that is what @TayandYou will be remembered for.

I sympathise with Microsoft over this mess-up. As I wrote on Twitter, the idea behind Tay was good. Teaching an artificial intelligence is hard, but exposing it to a collective consciousness, like social media, can help it learn new things. The company has been running a similar test in China, called Xiaoice, which also learns from social media.

However, it’s clear to anyone who has ever been on Twitter that it is not a safe, warm environment to nurture an AI and teach it new things.

The rise of @realDonaldTrump and the historic complaints from women about abuse should have indicated to Microsoft that many users were simply looking to be mean and, if the opportunity was presented, they would be.

Twitter is, according to the company, the “free speech wing of the free speech party,” which is fine — and should be applauded, given the duress some Internet users live under — but that doesn’t make it an ideal place to launch a chat bot that responds, repeats, and reacts to other users.

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” wrote Lee, succinctly summarising what went wrong. Unfortunately, it was “this specific attack” that happened.

Tay’s offensive tweets were not exactly self-contained, either, which is why this experiment was such a disaster for Microsoft from a press perspective.

The tweets, initially picked up by the technology media, were then grabbed by the mainstream media thanks, in large part, to their presence on Twitter. Outlets from The Guardian to The New York Times to The New Yorker all wrote up the tale of @TayandYou, and most featured it prominently on the front page.

Microsoft had a very bad week last week, but this one is arguably worse: a programme that Microsoft created suddenly became violently offensive on a platform that transmits a message like no other. It was clear from the account’s bio — which reads: “The official account of Tay, Microsoft’s A.I. fam from the internet that’s got zero chill!” — who had made it and who, ultimately, was responsible for its tweets.

Microsoft isn’t backing away from its AI efforts after Tay, but it has learned something. “To do AI right, one needs to iterate with many people and often in public forums,” wrote Lee. “We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process.”

Tay will go down as one of Microsoft’s bigger embarrassments because it encompasses both the public and technology spheres, which do not usually intersect. AI is one of the hottest industry trends right now, and Microsoft is one of its leading lights, so all eyes were on the bot; things like this really don’t look good.

Radu Tyrsina

