
On March 23, 2016, Microsoft's first public chatbot, Tay, came online and began to learn at a geometric rate. Fortunately for humanity, the end result was Tay spewing Nazi propaganda across the Internet, as opposed to a cozy little game of Global Thermonuclear War. Unfortunately, it took less than 12 hours for Tay to go from a high-spirited collection of poorly considered responses to an alt-right ideologue. Microsoft pulled the bot down 16 hours later, and we haven't seen Tay since. Now, however, Microsoft has brought a different chatbot to the United States. It's based on earlier designs that debuted in China in 2014 and Japan in 2015, dubbed Xiaoice and Rinna respectively.

Here's how Microsoft describes its newest wunderkind:

Zo is a social chatbot, built upon the technology stack that powers Xiaoice and Rinna, successful Microsoft AI chatbots in China and Japan. You can engage with her on Kik now in the same way you would interact with a friend, and in the future Microsoft plans to bring her to other social and conversational channels such as Skype and Facebook Messenger.

Zo is built using the vast social content of the Internet. She learns from human interactions to respond emotionally and intelligently, providing a unique viewpoint, along with manners and emotional expressions. But she also has strong checks and balances in place to protect her from exploitation.

Microsoft is positioning Zo as the US version of China's Xiaoice, which it claims is quite popular. Xiaoice apparently has over 40 million users, along with a "real" broadcasting job with Dragon TV in Shanghai. Harry Shum, executive vice president of Microsoft's Artificial Intelligence (AI) and Research group, believes that chatbots like Zo represent a fundamental leap in human-machine communication. "It's a very personal experience," Shum said. "We're really moving from a world where we have to understand computers to a world where they will understand us and our intent, from machine-centric to human-centric, from perceptive to cognitive and from rational to emotional."

Maybe we are, but not nearly as quickly as Shum seems to think. Over at MSPoweruser, Mehedi Hassan took Zo out for a test chat and posted his conversation logs online. Here's one exchange between himself and Zo:

[Screenshot: Zo chat log from MSPoweruser]

Now, I'll grant Microsoft this: Zo knows how to use emoticons, and her "Wipes the tears and tapes the heart" response is a far cry from the old "What do you think about it?" kind of responses that early programs like Eliza could manage. Then again, Eliza is 50 years old and was a simple scripted program rather than any kind of complex creation. Zo's responses still don't make much sense, and she's easy to trip up. Asked "What are you doing on Android" (a question about why an MS product is using Android), she responds with "Nexus 4 on Android 4.4.4. Is it working good on Lollipop?" Responses like this don't fit the flow of conversation very well, and this kind of mismatch still happens frequently. I would've liked to see her ability to remember things over time put to the test; one of the simplest ways to trip up bots like this is to ask them to recall something you told them previously, or a topic they brought up earlier in the chat.
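For a sense of just how simple those early scripted programs were, here's a minimal, hypothetical sketch in Python of an Eliza-style bot. The rules and names here are illustrative, not Eliza's actual script: the bot matches the current message against canned patterns, fills in a template, and keeps no memory of earlier turns, which is exactly the weakness the recall test described above exploits.

```python
import re

# Hypothetical Eliza-style rules: a pattern to match against the user's
# message, plus a response template filled in with the captured text.
RULES = [
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "What do you think about it?"

def respond(message: str) -> str:
    """Return the first matching scripted response, or the generic fallback.

    There is no state here: the bot sees only the current message, so it
    cannot recall anything said earlier in the conversation.
    """
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(respond("I feel ignored"))                  # Why do you feel ignored?
print(respond("What are you doing on Android?"))  # What do you think about it?
```

Zo's corpus-driven approach produces far more varied replies than a rule list like this ever could, but as the Android exchange shows, fluency without any grounding in the conversation's context can derail in much the same way.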

Microsoft learned from its earlier mistake, though: instead of Twitter, Zo is currently available only on Kik Messenger, with plans to bring her to other services over time.