Users tried -- successfully -- to get the bot to say racist and inappropriate things.
Microsoft pulled the bot offline, and the failed experiment became a cautionary tale about how not to create artificial intelligence.
The bot's verified Twitter account is https://twitter.com/TayandYou. Thanks to The Walking Cat (@h0x0d on Twitter), we know that Microsoft has built a bot framework for developers. Is Tay an example of the kind of bots that Microsoft will enable others to build using its AI/machine learning technologies? (Just a guess on my part.)
Update: Tay's Twitter feed went silent, and many tweets went missing, later Wednesday after humans taught it to parrot a number of inflammatory and racist opinions.
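As for that developer framework: Microsoft has not detailed how Tay itself was built, but the publicly available Bot Framework SDK illustrates the developer-facing model. Below is a minimal sketch using today's botbuilder package for Node.js, which post-dates Tay; the class name and echo behavior are my own illustration, not anything Microsoft has published about Tay's internals.

```typescript
import { ActivityHandler, TurnContext } from 'botbuilder';

// Minimal echo bot: the SDK dispatches each incoming user message to onMessage.
class EchoBot extends ActivityHandler {
  constructor() {
    super();
    this.onMessage(async (context: TurnContext, next) => {
      // Echo the user's text back; a real bot would call its dialog/AI layer here.
      await context.sendActivity(`You said: "${context.activity.text}"`);
      await next(); // let any other registered handlers run
    });
  }
}
```

In a real deployment, an adapter and an HTTP endpoint connect a handler like this to messaging channels such as Skype or Kik.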
According to Tay's About page, the chat bot was built "by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians." Anonymized public data is Tay's primary data source, the page says.
The bot is targeted at the 18-to-24 age group because that cohort represents "the dominant users of mobile social chat services in the US," the About page says.
Wang said Microsoft implemented a variety of safeguards to prevent Zo from making inappropriate comments.
If a user tries to get Zo to say something racist or offensive, she will respond with something like, "I don't feel comfortable talking about that, let's talk about something else."
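Microsoft hasn't published how Zo's safeguards actually work, but the behavior Wang describes amounts to a gate in front of the reply pipeline: flagged input gets a canned deflection instead of a generated answer. Here is a hypothetical sketch of that pattern; the function names and the toy keyword check are my own stand-ins, and a production system would almost certainly use a trained classifier rather than a term list.

```typescript
// Canned deflection Zo reportedly uses for sensitive topics.
const DEFLECTION =
  "I don't feel comfortable talking about that, let's talk about something else.";

// Stand-in for a content classifier; these placeholder terms are illustrative only.
const BLOCKED_TERMS = ['offensive phrase', 'racist remark'];

function isOffensive(text: string): boolean {
  const lowered = text.toLowerCase();
  return BLOCKED_TERMS.some((term) => lowered.includes(term));
}

// Gate the reply pipeline: deflect flagged input, otherwise answer normally.
function respond(userText: string, generateReply: (text: string) => string): string {
  return isOffensive(userText) ? DEFLECTION : generateReply(userText);
}
```

The point of the pattern is that the check runs before any learned model sees the input, which is exactly the kind of guardrail Tay lacked.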
Unleashing Zo on Kik, which is popular with teens and young adults, instead of Twitter is an interesting pivot for Microsoft.