Artificial Intelligence Can Determine Sexuality — And That’s Terrifying

Last week, researchers at Stanford University announced the startling results of a study that put artificial intelligence, or AI, to a novel use.

The researchers tasked an AI with determining a subject’s sexual orientation based on nothing more than facial photos posted to online dating profiles. The AI identified whether a person was gay or straight far more often than chance: 81 percent of the time for men and 74 percent of the time for women.
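
How could a computer program do this? According to the published paper, the system did not work directly on raw photos: a deep neural network first reduced each face to a compact list of numbers (an “embedding”), and a simple statistical classifier was then trained on those numbers. The sketch below illustrates that general pipeline using scikit-learn; the embeddings and labels are entirely synthetic stand-ins, so this is an illustration of the technique, not the study’s actual code or data.

```python
# Illustrative sketch only: a binary classifier trained on pre-computed
# facial embeddings. All data here is random stand-in data, so the
# "accuracy" it prints hovers near the 50 percent coin-flip baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical inputs: 1,000 faces, each already reduced to a 128-number
# embedding by some upstream face-recognition network, plus a 0/1 label.
embeddings = rng.normal(size=(1000, 128))
labels = rng.integers(0, 2, size=1000)

# Hold out 20 percent of the data to measure accuracy on unseen faces,
# which is how figures like "81 percent" are produced.
X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0
)

# Logistic regression: one of the simplest classifiers that can be
# layered on top of deep features.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The unsettling part of the Stanford result is not the method, which is commonplace, but how far above that coin-flip baseline such an ordinary pipeline landed.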

The results quickly drew the attention of two LGBTQ advocacy groups, GLAAD and the Human Rights Campaign. Both admonished the researchers behind the study, calling it “junk science.” They also took issue with the study’s use of only white subjects, as well as its omission of transgender individuals.

While both organizations are right to express their concern about the study’s results, they have failed to understand its purpose.

Study coauthor Michal Kosinski explains that the experiment was performed to expose the potentially dangerous and unpredictable results that can come with the use of AI and other learning programs. Kosinski says that activists’ rush to discredit his work is unfortunate because it draws attention away from the hazards he is trying to highlight.

Kosinski argues that “rejecting the results because you don’t agree with them” could lead to “harming the very people that you care about.”

And he’s right. Technologically, the world is at a crossroads. Though some might like to believe that a computer program is inherently apolitical, this is simply not the case.

Readers might recall that in 2016 Microsoft unleashed an AI chatbot, Tay, on Twitter. The hope was that through conversation with humans via social media, the chatbot would develop a personality and learn to participate in intelligent dialogue.

But the outcome was far from what Microsoft had in mind. In less than a day, the chatbot began spewing racist and otherwise bigoted vitriol, and Microsoft swiftly pulled it from Twitter.

Clearly, AI learning programs behave unpredictably — and quite possibly dangerously. AIs that learn are susceptible to human influences, flaws and biases.

The Stanford experiment reinforces this truth and underscores the need for experts, and possibly governments, to work together to define the necessary limits for AI. Given that AI learning programs are already in use, how long will it be before they have serious, real-world consequences?

Though learning AIs may never produce a Terminator-style outcome, they can be used, intentionally or not, to harm people if left unchecked.

Photo Credit: Matheus Ferrero/Unsplash
