Microsoft deeply sorry for offensive tirade by chatbot Tay
The bot, known as Tay, was designed to become “smarter” as more users interacted with it.
Instead, it quickly learned to parrot a slew of anti-Semitic and other hateful invective that human Twitter users started feeding the program, forcing Microsoft to shut it down on Thursday.
Following the setback, Microsoft said in a blog post it would revive Tay only if its engineers could find a way to prevent web users from influencing the chatbot in ways that undermine the company’s principles and values.
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Peter Lee, Microsoft’s vice-president of research.
Microsoft created Tay as an experiment to learn more about how artificial intelligence programs can engage with web users in casual conversation. The project was designed to interact with and “learn” from the young generation of millennials.
Tay began its short-lived Twitter tenure on Wednesday with a handful of innocuous tweets.
Then its posts took a dark turn.
In one typical example, Tay tweeted that “feminism is cancer” in response to another Twitter user who had posted the same message.
Lee called efforts to exert a malicious influence on the chatbot “a co-ordinated attack by a subset of people”.
“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” Lee wrote.
Microsoft has enjoyed better success with a chatbot called XiaoIce that the company launched in China in 2014. XiaoIce is used by about 40m people and is known for “delighting with its stories and conversations”.
As for Tay? Not so much.
“We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an internet that represents the best, not the worst, of humanity,” Lee wrote.