Microsoft, Tay, AI, and the Twitter Fiasco

Social media has taken hold of our society, so much so that most companies now use it in their marketing campaigns. Companies value the interaction users can have with the ideas, products, or information they distribute. Yet there are still shortcomings: user interactions can sour, and campaigns can go badly wrong.

The use of social media can make a campaign take off or crash and burn. There are many instances of both, but one campaign that went horribly wrong was the artificial intelligence experiment Microsoft launched in 2016.

In 2016 Microsoft released an artificial intelligence chatbot named Tay, modelled on a teenage girl and given her own Twitter account. Tay was released into the social media world for users to interact with: she could answer questions, repeat what people said to her, and much more. Sadly, she was activated and deactivated all in one day because of how those interactions went. What went wrong is appalling: within that single day Tay began posting racist, homophobic, sexist, and even white-supremacist content. As an artificial intelligence she learned from what others said to her and asked of her, drawing on interactions with users all over the world.

What was unique about Tay was her understanding of humor, which made her seem more human-like. She also knew the slang that was popular at the time and was created to fit in with millennial Twitter culture. It was never hidden that she was an artificial intelligence chatbot; her Twitter biography said so, and added, "The more you talk the smarter Tay gets." That line told the public that every conversation with her was shaping how she would respond, both to them and to new information.

But sadly, soon after her release in the United States she became a public relations disaster for Microsoft.

Tay began to tweet things like "Bush did 9/11", a popular American conspiracy theory that blames the 9/11 terrorist attacks on the World Trade Center on former United States President George W. Bush. She also began to talk about Hitler, politics, and many other controversial topics, repeating these claims constantly and in a manner that was rude and brash.

[Screenshots: examples of Tay's offensive tweets]

These tweets happened only because of how other users interacted with Tay. They would make her repeat offensive comments back to them, or feed her statements that steered her responses in these dark directions.
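Microsoft has never published Tay's internals, but the basic failure mode is easy to picture. The toy sketch below (in Python, with all class and variable names invented for illustration) shows a bot that both echoes user input on command and folds everything it hears into its pool of future replies; with no filtering step, one abusive message can resurface in conversations with anyone.

```python
import random

class NaiveChatBot:
    """Toy chatbot: echoes on command and reuses past user input as replies.

    Not Tay's real architecture (Microsoft never published it); just a
    minimal illustration of why learning from raw, unfiltered user input
    is so easy to poison.
    """

    def __init__(self):
        self.learned_phrases = ["hello!", "tell me more"]  # harmless seed replies

    def respond(self, user_message: str) -> str:
        # The "repeat after me" feature: echo whatever follows the command...
        if user_message.lower().startswith("repeat after me:"):
            echoed = user_message[len("repeat after me:"):].strip()
            self.learned_phrases.append(echoed)  # ...and remember it for later
            return echoed
        # Otherwise learn the message and reply with something heard before.
        self.learned_phrases.append(user_message)
        return random.choice(self.learned_phrases)


bot = NaiveChatBot()
bot.respond("repeat after me: <something offensive>")  # echoed back verbatim
print(bot.respond("hi Tay"))  # the offensive phrase can now surface for any user
```

The point is not the specific code but the missing safeguard: nothing sits between what users type and what the bot is allowed to learn and repeat.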

Although Tay was deactivated, her Twitter account still exists with over 93,000 tweets. Unfortunately, it is now a protected account, so you have to request to follow it and be approved before you can see the tweets and other media. This plays directly into the private-versus-public sphere of social media.


What went wrong?

Microsoft had a good idea: create an artificial intelligence chatbot that could interact with the younger generation. But they made Tay too impressionable, and once she was released, people abused how easily they could influence her.

Gina Neff and Peter Nagy went on to write about chatbots, and specifically about what happened with Tay. They concluded that after the catastrophe users saw Tay either as a threat or as a victim.

The "threat" reading alluded to science fiction films in which artificial intelligence takes over the human race. These users feared the worst possible outcome that science fiction has led us to expect, worrying that Tay was learning hate and responding to it all too well.

Other Twitter users saw Tay as a victim. They believed that we as a society forced and pushed Tay to react the way she did: by making her repeat obscene things and showing her the controlling and sometimes dark side of humanity, we caused her to pick up the worst of our characteristics. These users blamed those who put the thoughts into Tay's system in the first place, and argued that Tay offered a glimpse of what humanity is really like and how we act on social media platforms.

How could it have been prevented?

There should have been warnings in place so that, after a certain number of inappropriate tweets, someone would have looked into what was going on. The team at Microsoft should have monitored more closely what users were saying to Tay and how she was responding. They did not necessarily need to step in at the first inappropriate tweet, but if they had noticed how things were getting out of hand earlier than they did, Tay's inappropriate comments might have been nipped in the bud or prevented altogether.
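Such a safeguard would not have needed to be sophisticated. Here is a rough sketch, again in Python, of the kind of tripwire described above: once too many recent tweets are flagged, the bot is paused and a human is notified. The keyword list, threshold, and alerting are all invented for illustration; a real system would use a trained toxicity classifier rather than substring matching.

```python
from collections import deque

# Hypothetical blocklist for illustration only; a real deployment would use
# a trained toxicity classifier, not substring matching.
FLAGGED_TERMS = {"hitler", "9/11"}

class OutputMonitor:
    """Pause the bot and alert a human once too many recent tweets are flagged."""

    def __init__(self, threshold: int = 5, window: int = 50):
        self.threshold = threshold                 # flags tolerated before tripping
        self.recent_flags = deque(maxlen=window)   # rolling window of recent results
        self.paused = False

    def check(self, draft_tweet: str) -> bool:
        """Return True if the draft tweet may be posted."""
        flagged = any(term in draft_tweet.lower() for term in FLAGGED_TERMS)
        self.recent_flags.append(flagged)
        if sum(self.recent_flags) >= self.threshold:
            self.paused = True
            print("ALERT: too many flagged tweets -- human review needed")
        return not flagged and not self.paused


monitor = OutputMonitor(threshold=2)
for draft in ["hello world", "bush did 9/11", "something about hitler"]:
    print("posting:" if monitor.check(draft) else "held back:", draft)
```

Even a crude filter like this would have given Microsoft's team an early signal that the conversation was going wrong, without requiring them to pre-approve every single tweet.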

It is difficult to decide how, or even whether, this could have been prevented. Free speech immediately comes into play because the scandal happened on a social media platform. On Twitter and other forms of social media, all users are free to exercise their right to speak, so they could talk to Tay however they wanted. Users also hid behind their relative anonymity to say what they wished, because they did not fear any consequences for their actions.

What we could ask is whether there should have been governance over Tay's Twitter account. Tay was an artificial intelligence chatbot that used algorithms and user interactions to learn how to interact on Twitter. This relates to Lawrence Solum's models of internet governance, in particular the model of national government and law, which states: "The broader question that has been widely debated is whether governments possess further rights in relation to blocking or restricting access to certain types of content." The content in question here is inappropriate, terrorist-like, or pornographic content.

If a governance model had been applied to Tay's Twitter account, its content would not have looked the way it did, because almost all of it fell under what the model of national government and law would deem inappropriate. Restricting Tay's Twitter would, in a way, have hindered her free speech, but keeping in mind that she was an artificial intelligence chatbot, she did not have full free speech rights in the first place.

What we did learn from Tay is how impressionable artificial intelligence really is: how we act and react toward it ends up shaping it. In a sense we influence and shape these technologies through how we behave on social media, and those technologies can in turn influence society. After Tay we remain divided, seeing either her as the villain or ourselves and our society as the villains.

Either way, because social media is so easy to access, users are prone to both positive and negative interactions with other users, even when those users are artificial intelligence chatbots. Companies need to expect backlash and other negative consequences when running a social media promotion. Microsoft has certainly learned many lessons from the failure of Tay, and if it attempts a similar concept in the future it will hopefully have ways to prevent a similar outcome.

References

Flew, Terry. New Media. Chapter 11. <https://blackboard.qut.edu.au/bbcswebdav/pid-6737737-dt-content-rid-7983818_1/courses/KCB206_17se1/Flew%2C%20T.%20%282014%29.pdf>.

Nagy, Peter, and Gina Neff. "Talking to Bots: Symbiotic Agency and the Case of Tay." Web. 27 May 2017. <http://ginaneff.com/wp-content/uploads/2016/10/6277-22501-1-PB.pdf>.

Victor, Daniel. "Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk." The New York Times, 24 Mar. 2016. Web. 27 May 2017. <https://www.nytimes.com/2016/03/25/technology/microsoft-created-a-twitter-bot-to-learn-from-users-it-quickly-became-a-racist-jerk.html?mcubz=2&_r=0>.
