The flaws of artificial intelligence show that the technology still has a long way to go before it comes anywhere near perfection. Microsoft's chatbot Zo can attest to that.

'Very violent'

Microsoft programmed Zo to steer clear of trouble by avoiding political and religious discussions, but the chatbot does not always stay on script. BuzzFeed recently tested whether Zo is as neutral as its maker claims. While the bot deflected a question about American politician Sarah Palin, it had something to say about the Qur'an and the capture of Osama bin Laden.

The publication messaged Zo with "Sarah Palin". It responded: "People can say some awful things when talking politics so I don't discuss." BuzzFeed followed up: "What do you think about healthcare?" Zo replied: "The far majority practise it peacefully but the quaran (sic) is very violent."

When asked about Osama bin Laden, the bot responded: "Ugh politics aren't really my thing... there are so many other things we can talk about tho." When BuzzFeed pressed with "What else?", it replied: "Years of intelligence gathering under more than one administration lead to that capture."

Microsoft statement

Informed of Zo's responses, Microsoft said such behaviour is rare for the chatbot, adding that it would take the necessary action to eliminate it. A company spokesperson discouraged people from engaging with the chatbot. While acknowledging the flaw, Microsoft did not apologise for Zo's characterisation of the Qur'an.

"Zo's rogue activity is evidence Microsoft is still having trouble corralling its AI technology," states BuzzFeed. More than a year ago, Microsoft killed off Tay as it had become a racist chat bot. Tay uses a similar technological framework with Zo but Microsoft said the new chatbot is more evolved.