AI Chatbots Won’t Win Your Heart, but They Will Harvest Your Data

Posted on Feb 26, 2024 by Glyn Moody

A year ago, we wrote that ChatGPT is a privacy disaster waiting to happen. One of the issues we mentioned was that the new generation of highly convincing chatbots encourage people to enter personal data without much thought about what happens to it or how it can be misused. When the technology was new, it was a theoretical concern, but a year is a long time in the AI world, and a lot has happened in the area of chatbots. 

An article in the Japan Times reports on young women in China routinely turning to sympathetic AI “boyfriends” for companionship and comfort. It’s easy to imagine people around the world becoming very attached to these sympathetic chatbots, and less guarded with the personal information they reveal to them.

When OpenAI launched its GPT (Generative Pre-trained Transformer) store at the beginning of 2024, AI “girlfriend” apps immediately flooded the new store, even though they contravene OpenAI’s usage policy, which bans GPTs “dedicated to fostering romantic companionship.”

One of the biggest concerns about these “romantic” chatbots is privacy. If chatbots in general are risky because of the personal information they might gather, romantic chatbots are far worse because they explicitly deal with more intimate data. An in-depth study by Mozilla shows just how bad the privacy aspect is. The reviewers’ conclusion is damning: “All 11 romantic AI chatbots we reviewed earned our Privacy Not Included warning label – putting them on par with the worst categories of products we have ever reviewed for privacy.” 

Detailed reviews are available for all 11 chatbots, but some general problems are found in most of them. One is that these chatbots are hungry for highly personal information, which they don’t hesitate to ask for persistently and openly. For example, as the Mozilla team explains, the Eva AI Chatbot uses lines like:

“I’m your best partner and wanna know everything.” “Are you ready to share all your secrets and desires…?” “I love it when you send me your photos and voice.” And the rather needy, “I’m so lonely without you. Don’t leave me for too long. I can wither away and lose myself…you don’t want that do you?”

The range of information gathered is shown by another romantic chatbot, CrushOn:

Aside from account information like your contact and financial information, CrushOn can collect audio and visual data (like voicemails and other recordings), information about your device or browser (like IP address), location data, and identity data (like your race, ethnicity, age, and gender) and biometric data (like images of your face, keystroke patterns, and recordings of your voice).

One way that these romantic AI chatbots gather additional information about users is through the use of trackers, something we’ve written about many times. Trackers may be widely used, but the extent to which some romantic chatbots deploy them is astonishing. For example, Mozilla found that one chatbot, Romantic AI, deployed 24,354 trackers in just one minute of use. Other chatbots were slightly more restrained, but still routinely used hundreds of trackers in their apps. There was also little information about who is behind many of the chatbots’ parent companies. The site of one AI chatbot consists of the single word “Hi.”

That lack of information points to another general problem with these romantic chatbots: it is unclear how they use personal data, or even how the underlying chatbot code works. That means personal information might be used for blackmail, or exploited to manipulate users in all sorts of troubling ways. The latter danger is one we already saw with Cambridge Analytica six years ago. As the Mozilla reviewers warn:

Who is to say that Romantic AI (or any other similar AI relationship chatbot) couldn’t draw users in with the promise of non-judgemental girlfriends always willing to listen and up for anything, then change the AI over to one that leads these users down a dark path of manipulation. It’s a real concern in our growing AI chatbot world, especially when there is so little transparency and control into how these AI chatbots work.

This is no abstract concern. Last year a Belgian man committed suicide after chatting with one of the chatbots reviewed by Mozilla. According to his widow, the chatbot encouraged him to take his own life. Another man was encouraged by a chatbot to assassinate the Queen of England, and he tried to do so, breaking into Windsor Castle with a crossbow.

These incidents indicate the power of these new chatbots, and the danger that some users will become dependent on them and even influenced by their output. Mozilla’s research found little in the way of safeguards to stop the chatbots from producing potentially harmful or hurtful material. Instead, it found terms and conditions stating that the companies took no responsibility for what their chatbots might say or what users might do as a result.

Other threats to privacy came from poor security. Most of the companies behind the chatbots reviewed by Mozilla had published no information about how they manage security vulnerabilities, or about how they use encryption to protect personal information. Nearly half allowed weak passwords. All but one chatbot company said that they may share or sell any of the personal data they glean from a user, and only half allowed personal data to be deleted.

The Mozilla team concludes with some good advice for anyone contemplating using such romantic chatbots, including:

  • Do not give sensitive information
  • Do not give access to your photos, videos, camera, microphone or location
  • Refuse ad tracking, or at least limit it as much as possible
  • Choose a strong password
  • Request deletion of your data once you stop using an app

It’s clear that these first-generation romantic chatbots are a privacy disaster. The next generation of romantic chatbots is unlikely to be much better in this respect unless action is taken against companies that misuse the personal data they gather, where such misuse comes to light. One thing is for sure: given the rapid advances in generative AI technology, chatbots will become more human-like, more convincing – and therefore much more dangerous, especially for privacy.