Why Does Your Computer Need A Gaydar?

Fun fact: I don’t care how accurate your algorithm is if its only clear purpose is oppression.  I don’t care how “compelling” your data is if all it serves to do is perpetuate a damaging norm.  Data is built around norms, folks.  This isn’t news.  I don’t care what “science” you think you’re contributing to or how valuable you think your inquiry is for the scientific community if the only foreseeable use for your results is endangering a group of people.

I bring this up because I recently read about a study done at Stanford that created an artificial intelligence system that claims to be able to identify gay and lesbian individuals based on their facial features.  My initial reaction after reading this was to ask what could possibly be a useful purpose for creating this.  Who on earth benefits from immediate visual identification of (a notably very specific type of) queerness?  What purpose would this information serve, and who would it be serving?  As is so historically true of science and the world in general, it would not be serving queer people.  This study is rife with problems and with naive assumptions about the neutrality and rationality of anything labeled scientific, the first problem being that it does nothing to serve the group of people it seeks to study.

This is an issue of privacy.  If this AI becomes widely used, or even privately used, it can certainly be used to out queer folks.  Not only is that sometimes dangerous or detrimental to people’s livelihoods, it also violates their right to keep their sexuality private if they don’t wish to share it.  Making your sexuality explicit is something only queer people are pressured to do, because heterosexuality is constructed as the norm in our society: everyone is presumed heterosexual until proven otherwise.  Societal norms make it unlikely that anyone would ever build a system to identify straight people, so straight people never feel they have anything to hide.  Queer folks, because they have to come out for people to recognize their sexuality, often feel pressure to either hide or be out, both of which require large amounts of emotional labor.  Coming out always raises the question of how the people around you will react, and whether it will negatively affect the way they perceive you, which is never an issue for cishet people.  And the process of coming out is never done: it’s not a one-time event that’s finished for good; every time it comes up with someone new, you go through the whole (rather laborious) process all over again.  But importantly, queer folks have a choice (unless they are unwillingly outed by someone, or by some machine) about when to come out, and are hopefully rarely forced into revealing this information, particularly in a potentially dangerous context.  This AI forces that information out, potentially endangering queer folks and certainly taking away a right to privacy that, for a cishet person, would never be called into question.

Furthermore, the machine is far from perfect.  Even if we briefly operate under the assumption that it could have some non-dangerous use, it still misidentifies sexuality at a relatively high rate.  And this makes sense!  Sexuality is nuanced and fluid, and really cannot and should not be placed in the discrete boxes this system wants to use.  An additional erroneous and reductive aspect is that the AI was only trained on images of people who were perceived to be either gay or lesbian and who were white.  By reducing queerness to either gayness or lesbianism, and within that to white people specifically, you erase a whole host of very real and very (already) invisibilized identities.  Like I just said, sexuality is fluid, but humans love labels.  And labels can certainly be useful!  But when we take one or two labels to define huge categories of people, we end up erasing identities.  And by only including photos of white people in this study, the researchers have further reduced queerness to something that only exists in white folks, which is just wrong.  Artificial intelligence creation, and technology in general, has a long and nasty history of leaving out people of color, and it’s incredibly negligent to claim you have a system that can identify gay and lesbian folks when that system was not trained on anyone who is not white.
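To make concrete why “relatively accurate” still means a lot of misidentification once a system like this is pointed at the general population, here is a minimal back-of-the-envelope sketch using Bayes’ rule.  The numbers are purely hypothetical assumptions for illustration, not figures from the Stanford study.

```python
# A minimal sketch (hypothetical numbers, not the study's): even a classifier
# that sounds "accurate" mislabels most of the people it flags when the group
# it claims to detect is a small share of the population.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a flagged person actually belongs to the group (Bayes' rule)."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Assumed, illustrative values only.
sensitivity = 0.85   # assumed rate of correctly flagging people in the group
specificity = 0.85   # assumed rate of correctly passing over everyone else
base_rate = 0.05     # assumed share of the population the classifier targets

ppv = positive_predictive_value(sensitivity, specificity, base_rate)
print(f"Chance a flagged person is actually in the group: {ppv:.0%}")
# With these assumed numbers, only about 23% of flags are correct --
# yet every single flag still carries the risk of outing someone.
```

Under those assumed numbers, most of the people the system flags would be flagged wrongly, and the ones flagged correctly would be outed without their consent.  Either way, nobody is served.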

Similarly, who has labels applied to them is a function of systemic power.  People labeled “neutral”, people never labeled “other”, never feel the need for a label themselves, and so they are the ones who feel entitled to apply labels to others.  For example, cishet people don’t feel they need to invoke a label when identifying themselves, because their identity is conceptualized as obvious and as the norm.  Because their identity is the norm, they hold structural power over non-normalized identities and, as a result, are granted societal power to determine how marginalized groups are labeled and what those labels mean.  Coming out, then, is an act that places a lot of emotional labor on queer folks and (while it can be liberating because it gives visibility, among other things) is largely something constructed by cishet people for the benefit of cishet people.  Think about it: the closet isn’t somewhere queer folks want to be, and they obviously didn’t place themselves there.  The closet is a construction of heteronormativity, the societal norm that causes people to assume everyone is heterosexual until proven otherwise, as I said before (for an excellent article with more information on this, see here).

I feel like a broken record saying the tech industry needs to do better, but the tech industry really needs to do better, and especially do better by marginalized groups.  It is dangerous to keep thinking about technology as something that can somehow have the human element, and thus human bias, removed.  Technology is made by humans and is by no means free from bias.  In addition, we have to think more critically about who science is being done on, and who it is for.  Science has a long history of poking and prodding at marginalized groups such as people of color, women, disabled folks, and queer folks, yet it often forgets to ask those groups what would actually benefit them.  This AI is a great example of that.  It studies queer bodies without ever being concerned about the potential harm it can do, or about what would actually be beneficial to queer folks.  I firmly believe that science is an important and valuable way of understanding our world, but it’s dangerous to forget to bring the voices of the people science is being done on to the table.  That’s how we end up with computers with purported “gaydar”, a concept many queer people reject as highly problematic because it insists that queerness has a single set aesthetic and erases folks who don’t subscribe to it.

So, how do we do better?  We stop to think about why we feel the need to imbue technology with human qualities, and why we choose the particular human qualities that we do.  We think critically about who gets to create this technology and whose voices really need to be at the table.  And we stop creating technology whose only clear purpose is harm.
