Posted on Sep 9, 2017 by Rick Falkvinge

A computer tells your government you’re 91% gay. Now what?

A fascinating and horrifying new AI algorithm can predict your sexual orientation with 91% accuracy from five photographs of your face. According to the researchers, the human brain isn’t wired to read this data from a face, but the results show it is there, and an AI can detect it. This raises a bigger issue: who will have access to AI in the future, and what will they use it for?

The article in The Guardian is fascinating and scary. It describes new research that can predict with 91% accuracy whether a man is homosexual, based on just five photographs of his face. Similarly, it reaches 83% accuracy when predicting homosexuality in women. This makes the AI leaps and bounds better than its human counterparts, who got it right 61% and 54% of the time, respectively — more or less a coin toss, useless as a measure. The researchers describe how the human brain apparently isn’t wired to detect signs that are nevertheless present in the face of an individual, and demonstrably detectable by a machine.

Normally, this would just be a curiosity, akin to “computer is able to detect subject’s eye color using camera equipment”. But this particular detection has very special, and severe, repercussions. In too many countries, all of which we consider underdeveloped, this particular eye color — this sexual orientation — happens to be illegal. If you were born this way, you’re a criminal. Yes, it’s ridiculous. They don’t care. The punishments go all the way up to the death penalty.

So what happens when a misanthropic ruler finds this AI and decides to run it against the passport and driver’s license photo databases?

What happens when the bureaucracy in such a country decides you’re 91% gay, based on an unaccountable machine, regardless of what you think?

This highlights a much bigger problem with AIs than the AIs themselves, namely what happens when despotic governments get access to superintelligence. It was discussed briefly on Twitter the other day, in a completely different context:

“Too many worry what Artificial Intelligence — as some independent entity — will do to humankind. Too few worry what people in power will do with Artificial Intelligence.”

Now, a 91% indicator is not enough to convict somebody of this “crime” in a court of law, at least not in a justice system meeting any kind of reasonable standard. But it doesn’t have to be a reasonable standard.
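
To see why, here is a back-of-the-envelope sketch. None of the numbers below come from the study or from this article; the database size, the base rate, and the reading of “91% accuracy” as 91% sensitivity and 91% specificity are all assumptions, picked purely to illustrate what happens when such a classifier is run against an entire population:

    # Back-of-the-envelope only: every number below is an assumption
    # for illustration, not a figure from the study or the article.

    def mass_scan(population, base_rate, sensitivity, specificity):
        """Count who gets flagged when a classifier scans a whole photo database."""
        actually_gay = population * base_rate
        not_gay = population - actually_gay

        true_positives = actually_gay * sensitivity        # correctly flagged
        false_positives = not_gay * (1 - specificity)      # wrongly flagged

        # Bayes' rule: chance that a flagged person actually is gay
        posterior = true_positives / (true_positives + false_positives)
        return true_positives, false_positives, posterior

    tp, fp, posterior = mass_scan(
        population=10_000_000,   # assumed size of a national photo database
        base_rate=0.05,          # assumed share of the population
        sensitivity=0.91,        # assumed from the reported "91% accuracy"
        specificity=0.91,        # assumed from the reported "91% accuracy"
    )
    print(f"Correctly flagged: {tp:,.0f}")   # 455,000
    print(f"Wrongly flagged:   {fp:,.0f}")   # 855,000
    print(f"Chance a flagged person actually is gay: {posterior:.0%}")  # 35%

Under those assumed numbers, the wrongly flagged outnumber the correctly flagged, and a flag from the machine is right only about one time in three. That is nowhere near proof. But a despotic bureaucracy does not have to care.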

If you want an idea of what could happen, well within the realm of horrifying possibility, consider the McCarthy era in the United States, when anybody remotely suspected of being a communist was shut out from society: denied jobs, denied housing, denied a social context.

What would have happened if a computer of the time, based on some similar inexplicable magic, decided that a small number of people were 91% likely to be communist?

They would not have gotten housing, they would not have gotten jobs, they would have lost many if not all of their friends. All because some machine determined them to possibly, maybe, maybe not, probably (according to the machine’s builders), be in a risk group of the time.

We need to start talking about what governments are allowed to do with data like this.

Sadly, the governments which need such a discussion the most are also the governments which will allow and heed such a discussion the least.

Privacy really remains your own responsibility.

About Rick Falkvinge

Rick is Head of Privacy at Private Internet Access. He is also the founder of the first Pirate Party and is a political evangelist, traveling around Europe and the world to talk and write about ideas of a sensible information policy. Additionally, he has a tech entrepreneur background and loves good whisky and fast motorcycles.

  • Nathan

    A 91% accuracy does not equate to being “91% gay” or “91% likely to be a communist”. It means that the algorithm labels each person as gay or not, and gets it wrong 9% of the time. That being said, I fully agree that the risk of this kind of software being abused is extremely high. The 9% of people who could be called out as gay by mistake, and suffer the consequences at the hands of an abusive power, are only the cherry on top.

    Also troubling is the idea that we could, or would want to, infer behavior simply by looking at a face. This directly attacks the right of people to define themselves and flirts dangerously with racism, sexism, and other types of discrimination. An AI trained to detect criminals in a population would most likely flag black people as the most likely criminals, simply because they’re overrepresented in the prison population. But that should never justify treating black people as “most likely criminals” by default.

    • x25mb

      91% is called “enough-so-i-can-justify-what-i’m-going-to-do-to-you”. it doesn’t even need to be that high… or true even… it almost seems as if the problem isn’t the AI (or the law) but the people behind it…

  • See also https://www.lightbluetouchpaper.org/2017/09/10/is-this-research-ethical/

    Not just unethical, but Ross Anderson writes “My students pretty well instantly called this out as selection bias”, so it may be total bullshit.

    • Falkvinge

      It may, but not necessarily because of selection bias — note how the first comment in that link debunks the debunking.

      However, regardless of the validity of the result per se, the questions raised by this remain important.

  • Joe Smith

    wrg, not, idts. no such thing as what tho, do/can do anyx nmw and it can all be perfx