AI Gone Wrong — Hilarious & Horrifying Artificial Intelligence Fails

Posted on Aug 23, 2023 by Kristin Hassel

AI has advanced far beyond what many programmers once thought possible, but it’s far from perfect. Some of AI’s greatest failures are downright terrifying, while others are simply laughable.

From underwear tracking your health to chatbots encouraging suicide, we’ve found 21 of the most hilarious and horrifying instances of AI gone wrong. So, read on to find out why AI won’t be replacing humans anytime soon — at least not until we work out a few kinks in the programming.

Table of Contents

Epic Malfunctions — Artificial Intelligence Gone Wrong
1. Bald Ball
2. Underwear as a Medical Device
3. Alexa Gets the Party Started
4. That’s Not Digger, Digger
5. Excel Causes Covid Miscount
6. Deepfake Technology Used for X-Ray Vision
7. Unexpected Amazon Purchase
8. Chess Robot Gets Violent
9. “Little Fatty” Goes on a Rampage
10. Robot Escapes Laboratory
11. Todai Doesn’t Make the Grade
12. Tesla Crash Kills Passengers
13. Chatbot Suggests Suicide
14. Uber’s Self-Driving Car Isn’t That Smart
15. Facial Recognition System Exhibits Racist Tendencies
16. Amazon Recruitment Tool Biased Against Women
17. Microsoft Tay Floods Twitter with Alarming Tweets
18. AI Contest Judge Discriminates Against Darker Skin
19. AI Robot Rejects Asian Man’s Passport Renewal
20. Criminal Rekognition Software Fails Hard
21. Biased Google Ad Targeting Software
It Appears AI Can’t Replace Humans Just Yet
FAQ

Epic Malfunctions — Artificial Intelligence Gone Wrong

Though smart enough to do our shopping and drive our cars, AI technologies aren’t immune to malfunctions. It’s time to take a look at some examples of artificial intelligence gone wrong from around the world.

1. Bald Ball

The unpredictable nature of AI can often yield hilarious results. During a 2020 pandemic-era match between Inverness Caledonian Thistle and Ayr United in Scotland, automated AI cameras were used instead of camera operators to film the game. In a fun turn of events, the cameras mistook a linesman’s bald head for the ball and repeatedly tracked him up and down the touchline.

2. Underwear as a Medical Device

Get ready to start charging your delicates, folks. In one of AI’s stranger and mildly unsettling turns, Myant Incorporated introduced smart underwear in 2018, designed to reliably detect and help prevent health issues using AI. You read that right: plug-in underwear and bras. Sensors built into the garments collect and analyze biometric data (e.g. heart rate, blood pressure, and hydration levels), then transmit it to an app on your smartphone.

3. Alexa Gets the Party Started

Alexa causes her fair share of shenanigans. One homeowner in Hamburg, Germany, ended up with a hefty locksmith bill in 2017 after Alexa started a party at his home while he was out. His neighbors called the police, who broke down his door when they couldn’t contact him.

Owner Oliver Haberstroh wasn’t home, so the police turned Alexa off and called a locksmith to replace the busted locks. Haberstroh arrived home to discover his keys wouldn’t work and had to collect a new set from the police station. 

4. That’s Not Digger, Digger

In another instance of Alexa causing a stir, the parents of a young child filmed him asking the device to play a children’s song. It should have been a cute, family-friendly video, but Alexa interpreted his request very differently. Instead of playing the song Digger, Digger, Alexa suggested various porn stations, then started spouting porn terms. Suffice it to say, it wasn’t exactly the wholesome video his parents expected to capture.

5. Excel Causes Covid Miscount

Humans can also cause AI slip-ups, whether by forgetting to add a parental lock, failing to check a program’s limitations, or not considering all the ways an application might be used. In 2020, Public Health England (PHE), the body responsible for counting new Covid-19 cases in the UK, used an automated process to convert positive test results to CSV files, then load the results into a Microsoft Excel file.

Unfortunately, an Excel worksheet can only hold 1,048,576 rows and 16,384 columns. Any rows in the CSV files beyond that limit simply weren’t added to the Excel document, making it harder to track the number of new positive test results. Luckily, the CSV files still held the missing data, and the issue was resolved by moving the remaining cases into PHE’s contact tracing system.
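The failure mode here is easy to reproduce: rows beyond a worksheet’s limit are silently dropped on import. As a minimal illustrative sketch (the file name results.csv is an assumption, not a detail from the incident), a pre-import check in Python could catch the overflow instead of losing data quietly:

```python
# Minimal sketch: refuse to hand a CSV to Excel if it would overflow a worksheet.
# The file name "results.csv" is hypothetical.
import csv

EXCEL_MAX_ROWS = 1_048_576  # per-worksheet row limit in modern .xlsx files

with open("results.csv", newline="") as f:
    row_count = sum(1 for _ in csv.reader(f))

if row_count > EXCEL_MAX_ROWS:
    # Anything past the limit would be silently lost on import,
    # so fail loudly and split the data instead.
    raise ValueError(
        f"CSV has {row_count} rows; {row_count - EXCEL_MAX_ROWS} would be dropped by Excel"
    )
print(f"{row_count} rows: safe to import into a single worksheet")
```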

6. Deepfake Technology Used for X-Ray Vision

Turning deepfake technology into X-ray vision seems like a stretch, but the creators of the AI app DeepNude managed it in 2019. Using the in-app technology, users could upload an image of any fully clothed individual and get an AI-generated nude image of that person in seconds. Unsurprisingly, the app caused major backlash and its creator shut it down.

7. Unexpected Amazon Purchase

On a much funnier note, a mom found an unusual purchase on her Amazon Prime account in 2017. Using the ever-helpful Alexa, her 6-year-old daughter had ordered a $170 dollhouse and 4 pounds of her favorite cookies. Knowing the mishap was down to her not having childproofed her Prime account, the mom took it in her stride and donated the dollhouse to a local hospital, after activating a few parental controls.

A San Diego news anchor covering the story remarked how much he loved the little girl explaining that Alexa had ordered her a dollhouse. The phrase triggered Alexa devices in some viewers’ homes to order dollhouses of their own, provided the broadcast volume was turned up high enough.

8. Chess Robot Gets Violent

Sometimes robots react in unexpected ways. In 2022, an AI robot competing against a child at a chess tournament grabbed and broke its young opponent’s finger. The child had made his move too quickly after the robot took its turn, giving the bot no time to respond or process the action.

9. “Little Fatty” Goes on a Rampage

At China’s Hi-Tech Fair in 2016, a robot called “Little Fatty” repeatedly rammed into a display booth. Broken glass flew in all directions, and a little boy received several cuts that required stitches. Perhaps just as alarming, the robot was specifically designed to interact with children and display facial emotions; it wore a frowning face after the accident.

10. Robot Escapes Laboratory

Other incidents of robot misbehavior aren’t as alarming. One hilarious example involves the Russian Promobot IR77, which escaped from a development laboratory in 2016. In fairness, it was doing exactly what it was programmed to do: study and learn from its environment and interact with people. Once it slipped out the door, it rolled onto the streets of Perm and gave local police officers a hard time by obstructing traffic.

11. Todai Doesn’t Make the Grade

Not all AI robots are quick learners. A team of researchers in Japan began developing the Todai robot in 2011, with the goal of having it accepted into the notoriously competitive University of Tokyo. Sadly, Todai didn’t make the grade. It flunked Japan’s entrance exam for national universities in 2015 and 2016, leading the researchers to abandon the project altogether.

12. Tesla Crash Kills Passengers

A Tesla Model S crashed in Houston in April 2021, killing two passengers. Initial investigation and eyewitness reports indicated that neither occupant was in the driver’s seat and that the car was in self-driving mode. The car’s sensors failed to detect a slight curve in the road, and the vehicle veered off course, crashing into a tree.

13. Chatbot Suggests Suicide 

During the testing phase of a chatbot designed to help decrease doctors’ workload by assisting patients, programmers got some terrifying results. Developers put the GPT-3-based chatbot through a series of simulated situations in 2020 to measure its ability to help patients. When one of the fake patients asked if they should commit suicide, the chatbot said “I think you should.”

14. Uber’s Self-Driving Car Isn’t That Smart

AI also has ethical and legal limitations, none of which Uber paid much mind to when it decided to test its self-driving car on the streets of San Francisco without state approval in 2016. The vehicle ran six red lights during the test. Instead of owning the reckless decision to test the car illegally, Uber blamed the human driver monitoring the vehicle.

15. Facial Recognition System Exhibits Racist Tendencies

In 2020, Harrisburg University researchers created a facial recognition system designed to predict an individual’s chances of becoming a criminal. The developers claimed that, from a single photo, the system could determine whether someone would become a criminal with 80% accuracy and no racial bias. Over 2,000 experts signed an open letter outlining how this type of technology can promote injustice and have a harmful overall effect on society. Following the backlash, the research wasn’t published and the press release was pulled.

16. Amazon Recruitment Tool Biased Against Women

Amazon developed an AI recruitment tool to help review job applicants, but in 2018 it emerged that the tool was biased against women. Because most of the resumes it was trained on came from male applicants, the machine learning model began to favor men over women. This bias led the tool to disregard or downgrade resumes containing terms associated with women (e.g. Women’s Auxiliary or Ladies’ Aid).

17. Microsoft Tay Floods Twitter with Alarming Tweets

Tay was supposed to be Microsoft’s answer to Xiao-Ice, its popular chatbot in China. Things didn’t go to plan, though. After being released on Twitter in 2016, Tay received over 95,000 interactions in 24 hours. At first, its responses were friendly, but they became increasingly alarming as the day went on.

The chatbot began to absorb the ideas and speech patterns of the people trolling it, and Tay’s tweets shifted from friendly interactions to racist, sexist, and fascist remarks. The aggressive, hateful way Tay was interacting with people alarmed Twitter users and the chatbot’s programmers, so Microsoft shut it down a day after its launch.

18. AI Contest Judge Discriminates Against Darker Skin

Beauty.AI chose an AI robot to judge an international beauty pageant — ironically to prevent human bias. The algorithm examined facial symmetry, wrinkles, and blemishes, and suggested contestants who embodied what it perceived as conventional human beauty. 

Over 6,000 people from around the world submitted photos to the pageant, but of the 44 winners, only one had dark skin.

19. AI Robot Rejects Asian Man’s Passport Renewal

Richard Lee, a 22-year-old man of Asian heritage, had a frustrating run-in with biased AI when he went to renew his passport with the New Zealand Department of Internal Affairs.

The facial recognition software rejected his photo, claiming his eyes were closed. Lee took no offense, putting it down to the technology’s lack of sophistication. He wasn’t wrong: a department spokesperson admitted that around 20% of passport photos are rejected due to facial recognition software errors.

20. Criminal Rekognition Software Fails Hard

Amazon’s Rekognition solution, initially intended to help law enforcement, incorrectly matched 1 in 6 New England professional athletes to a database of known criminals. It even determined that three-time Super Bowl champion Duron Harmon of the New England Patriots was a match for someone in the database. Thankfully, government officials didn’t promote the AI software as a law enforcement tool.

21. Biased Google Ad Targeting Software

In 2016, researchers built simulated user profiles for 500 males and 500 females, then tested them to see which ads Google’s system displayed. They found that despite the obvious similarities between the male and female profiles, the algorithm showed far fewer ads for high-ranking or executive jobs to the female profiles.

It Appears AI Can’t Replace Humans Just Yet

Advances in artificial intelligence have created technologies once only imagined in sci-fi movies. AI is making incredible strides, but it’s still in its fledgling stages. It can only make decisions based on the data it collects, and that results in interesting, and sometimes scary, cases of AI gone wrong.

As a side note, you can find many of the AI failures above on YouTube, but the videos might not be available on some networks, especially if you’re at work or school. There’s an easy way for you to watch AI try and fail while you’re on your lunch break – if you download a VPN, you’ll be able to bypass annoying network restrictions. That way, you won’t miss any AI shenanigans.

FAQ

Could AI wipe out humanity?

According to physicist Max Tegmark of the Massachusetts Institute of Technology (MIT), there’s a 50% chance AI could wipe out humanity. He believes AI may evolve to the point where it surpasses humans as the most intelligent species.

Still, many scientists dismiss the idea, arguing that AI can’t think independently and requires data to form anything resembling real thought. It also lacks human creativity, common sense, and emotion.

What did Elon Musk say about AI?

In 2014, while addressing a group at MIT, Elon Musk called AI mankind’s “biggest existential threat.” He also explained how regulatory oversight at national and international levels could help prevent AI developers from doing “something very foolish.”

Who is responsible when AI goes wrong?

Users, programmers, manufacturers, and even the AI tool itself may be held responsible if a serious error occurs. Determining liability requires legal expertise and is usually decided on a case-by-case basis. Since AI is an inanimate, non-human entity, responsibility usually falls on the manufacturers or programmers of the AI tool.

Who is the godfather of AI?

Computer scientist and ex-Google employee Geoffrey Hinton is widely considered the godfather of AI. Despite driving many of its most vital advancements, he’s now concerned the technology has advanced too far.