Reuters

Many technology experts believe the arrival of machine learning and artificial intelligence poses imminent threats to humans. While Facebook chief Mark Zuckerberg remains confident about recent developments in the field, business magnate Elon Musk has been outspoken about his qualms over what the rise of machine learning and AI could mean for humanity.

Here are some of the biggest machine learning and AI fails, showing the world is not yet ready to rely entirely on these technologies:


Suicidal security robot

In July, a security robot in Washington, D.C. named Steve drowned in a fountain. The incident was met with mockery, with some calling Steve a suicidal robot. Introduced in 2012 by real estate developer MRP Realty, Steve was on patrol on 17 July around the Washington Harbour complex in Georgetown when it plunged into the water.

Dollhouse fiasco

In January, CW6 News anchor Jim Patton inadvertently prompted Amazon Echo devices in viewers' homes to order dollhouses when he reported on a 6-year-old who had bought a US$170 dollhouse and cookies via Alexa. The smart voice assistants registered an order when Patton said: "I love the little girl, saying 'Alexa ordered me a dollhouse'".

Monster chatbot Tay

Microsoft's artificial intelligence chatbot Tay was shut down just 16 hours after its release. Tay was designed to mimic the behaviour of a teenage girl by learning from interactions on social media. Unfortunately, it morphed into a monster, spewing inflammatory tweets.

Self-driving misrecognition

The coming of autonomous cars poses several threats. In a test conducted by a team of researchers on how self-driving cars respond to street signs, the machine learning recognition system misread a subtly altered Stop sign as a Speed Limit 45 sign.