Thursday, March 24, 2016

Microsoft's Tay AI chatbot goes offline after being taught to be a racist (ZDNet)


Microsoft's millennial-talking AI chatbot, Tay.ai, has taken a break from Twitter after humans taught it to parrot a number of inflammatory and racist opinions.

Microsoft had launched Tay on Wednesday, aiming it at people aged between 18 and 24 years in the US. But after 16 busy hours of talking on subjects ranging from Hitler to 9/11 conspiracies, Tay has gone quiet.

"c u soon humans need sleep now so many conversations today thx," Tay said in what many suspect is Microsoft's effort to silence it after Tay made several provocative and controversial posts.

Tay's artificial intelligence is designed to use a combination of public data and editorial content developed by staff, including comedians. But, as an AI bot, it also learns from people's chats, using them to deliver personalized responses.

Microsoft intended for Tay to "engage and entertain people" through casual conversation. But, as the Guardian reports, Tay (or rather Microsoft) was given a sharp reminder of the internet's so-called Godwin's law, with users trying numerous ways to make it say, "Hitler was right".

Although Tay was mostly just repeating other people's comments, this data is used to train it and could affect its future responses.
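As a hypothetical illustration (not Microsoft's actual architecture), a toy bot that stores unfiltered user phrases and replays them shows how coordinated input can skew future responses:

```python
# Toy sketch of a "parroting" chatbot with no moderation filter.
# This is an assumption-laden illustration of the failure mode, not
# a description of how Tay was actually implemented.

import random
from collections import defaultdict


class ParrotBot:
    """Stores user phrases keyed by topic and replays them later."""

    def __init__(self):
        self.memory = defaultdict(list)

    def learn(self, topic, phrase):
        # No filtering: every user phrase goes straight into the
        # bot's "training data".
        self.memory[topic].append(phrase)

    def respond(self, topic):
        phrases = self.memory.get(topic)
        if not phrases:
            return "tell me more!"
        # Replies are sampled from what users taught it, so a flood
        # of coordinated input dominates the response distribution.
        return random.choice(phrases)


bot = ParrotBot()
# A coordinated group repeatedly feeds the bot inflammatory text:
for _ in range(100):
    bot.learn("history", "inflammatory claim")
bot.learn("history", "benign fact")
# Future responses on that topic are now overwhelmingly poisoned.
print(bot.respond("history"))
```

The point of the sketch is that without a moderation layer between user input and the learned model, the bot's output is only as safe as its loudest teachers.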

One tweet underlined that Tay is not safe with the internet public as its teachers: asked whether Ricky Gervais is an atheist, it replied, "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism".

Microsoft predicted 2016 would be the year of the bot, but apparently it didn't foresee that the internet would inevitably attempt to hijack it.

But perhaps Microsoft shouldn't have deleted all of Tay's pro-Hitler comments. As one user quipped: "Stop deleting the genocidal Tay tweets @Microsoft, let it serve as a reminder of the dangers of AI".

ZDNet has contacted Microsoft for comment but it has yet to respond.
