Artificial Intelligence Can Determine Sexuality — And That’s Terrifying

Last week, researchers at Stanford University announced the stunning results of a study that used artificial intelligence, or AI, in a novel way.

The researchers tasked an AI with determining a subject’s sexuality based on nothing more than faces in photos posted to online dating profiles. The AI correctly identified whether a person was gay or straight 81 percent of the time for men and 74 percent of the time for women.

These results quickly drew the attention of two LGBTQ advocacy groups, GLAAD and the Human Rights Campaign. Both admonished the researchers behind the study, calling it “junk science.” They also took issue with the study’s use of only white subjects, as well as its omission of transgender individuals.

While both organizations are right to express their concern about the study’s results, they have failed to understand its purpose.

Coauthor Michal Kosinski explains that the experiment was performed in order to expose the potentially dangerous and unpredictable results that come with the use of AI and other learning programs. Kosinski says that activists’ rush to discredit his work is unfortunate because it draws attention away from the hazards he is trying to highlight.

Kosinski argues that “rejecting the results because you don’t agree with them” could lead to “harming the very people that you care about.”

And he’s right. Technologically, the world is at a crossroads. Though some might like to believe that a computer program is inherently apolitical, this is simply not the case.

Readers might recall that Microsoft released an AI-powered chatbot, Tay, on Twitter in 2016. The hope was that through conversation with humans via social media, the chatbot would develop a personality and learn to participate in intelligent dialogue.

But the outcome was far from what Microsoft had in mind. In less than a day, the chatbot began spewing vitriol laced with bigotry of every kind. Microsoft swiftly removed it from Twitter as a result.

Clearly, AI learning programs behave unpredictably — and quite possibly dangerously. AIs that learn are susceptible to human influences, flaws and biases.

This Stanford experiment reinforces this truth and underscores the need for experts and possibly governments to work together to define the necessary limits for AI. Given that AI learning programs are already in use, how long will it be before they have serious, real-world consequences?

Though learning AIs might not necessarily produce a Terminator-style outcome, they may be used, intentionally or unintentionally, to negatively impact people if left unchecked.

Photo Credit: Matheus Ferrero/Unsplash


Paulo R · 4 days ago

sounds very bizarre, ty

Jaime J · 7 days ago

Thank you!!

Cruel Justice · 9 days ago

Will Cyborgs be far behind? lol

Bill Eagle · 9 days ago

Interesting and just a little bit creepy.

pam w · 10 days ago

Scary! Remember, Stephen Hawking said AI is the most dangerous idea ever... and he's right.

Aaron F · 10 days ago

Could we see the actual facts and figures of this "study"? Including the methodology and reported conclusions?

Norman P · 11 days ago

Run the faces of Evangelicals thru it, the ones who oppose equality and gay marriage.

Margaret Goodman · 11 days ago

I have read that this Stanford study is not nearly as frightening as it sounds. The computer was given pairs of faces and asked to determine which face of the pair belonged to someone homosexual, so it already had a 50% chance of being correct. When the computer was given single faces, it did not do nearly as well, possibly at or below human-level accuracy in determining sexuality.