Sexism and Racism in AI Models

Introduction

I wasn’t able to find anything interesting for today’s news. Still, I wanted to write something, so I’m sharing my observations and opinions about recent happenings around image recognition technology. Today’s post is about sexism and racism in AI models.

In particular, I’m a little bit concerned about how AI technology is used and what is expected of it, especially in light of recent events around the Black Lives Matter movement.

It’s a fact that image recognition and similar technologies are biased. And not just image recognition: all modern AI technologies are race- and gender-biased, among other biases.

Can we fix this problem? I seriously doubt it. But we can, and should, be aware of these biases when we use these technologies. Let me explain.

Racism in Image Recognition AI Models

The first thing I noticed was the public dissatisfaction with the skin-color bias of AI models:

https://www.washingtonpost.com/technology/2020/06/12/facial-recognition-ban/

The Black Lives Matter movement notched a win in Silicon Valley this week, turning police use of facial recognition technology into a litmus test for Big Tech’s support of civil rights.

Another recent interesting post:

https://www.designboom.com/technology/face-depixelizer-ai-pixelated-images-06-22-2020/

however, other users started noticing how this AI tool was not accurate when it came to processing black faces. for example, when processing a pixelated picture of Barack Obama, face depixelizer turned him into a white man. and even if users continued to import different pixelated pictures of Obama, the result was consistently wrong

An example of the above:

https://twitter.com/Chicken3gg/status/1274314622447820801/photo/1?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1274314622447820801%7Ctwgr%5E&ref_url=https%3A%2F%2Fwww.designboom.com%2Ftechnology%2Fface-depixelizer-ai-pixelated-images-06-22-2020%2F
Barack Obama depixelized

AI model = AI Algorithm + Training Data.

An AI algorithm is a mathematical structure, and math itself is not biased. But training data is.

In the past, let’s say ten years ago, we trained models on limited data sources, simply because that was all the hardware could handle. Training sets were counted in thousands or tens of thousands of observations. In such a “small” training dataset, it is easy to notice some kind of bias in the data and deal with it appropriately. Whether that was actually done is another question, but it was possible for humans to do if there was a will to do so.

Recent advances in GPUs (gamers, many thanks for this) made Deep Learning possible. Deep Learning produced impressive results, based on high processing speed and a huge volume of training data. This data is collected automatically from available online resources and labeled in a semi-supervised manner, with human intervention at some points in the process. Now we are talking about millions of training examples (observations) collected and processed by machines.

Let’s talk about bias (racism) in image recognition and image-processing AI technologies in general. Are they race-biased? Apparently yes. Why? Only because there are more images of white men and women available on the Internet and in private data centers than images of people with darker skin. Skin-color bias then appears as a logical consequence of that fact. Simply put, AI models are more successful at processing images of white people because more images of white people are available for training.
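To make this concrete, here is a minimal Python sketch of how under-representation alone produces such a gap. Everything in it is a made-up assumption for illustration: the data is synthetic (no real images), the 95/5 group split is invented, and scikit-learn is assumed to be installed. The only point is that a model whose training set is dominated by one group performs noticeably worse on the other.

# Toy demonstration: a classifier trained mostly on "group A" data
# performs worse on the under-represented "group B".
# Synthetic data only; all numbers are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-class toy data for one group; `shift` moves its feature distribution,
    # standing in for the visual differences a real model would have to learn.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 2.0 + shift, scale=1.0, size=(n, 2))
    return X, y

# Group A dominates the training set (95%); group B is under-represented (5%).
Xa, ya = make_group(9500, shift=0.0)
Xb, yb = make_group(500, shift=3.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced, held-out samples from each group.
Xa_test, ya_test = make_group(2000, shift=0.0)
Xb_test, yb_test = make_group(2000, shift=3.0)
print("accuracy on group A:", clf.score(Xa_test, ya_test))
print("accuracy on group B:", clf.score(Xb_test, yb_test))

On this toy setup the gap is large simply because the single decision boundary is fitted almost entirely to the majority group; no malicious intent is needed anywhere in the pipeline.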

Some truths about the face depixelization technology:

the truth is that face depixelizer doesn’t magically depixelate a photo and reveal the actual person, but rather it can generate an alternative image where it finds a photo with a similar look and turns the pixelated image into a high-res, realistic one.

https://www.designboom.com/technology/face-depixelizer-ai-pixelated-images-06-22-2020/

the tool was not made to show what the pixelated image actually looks like, but actually just to find any face that fits. ‘this tool will not restore that original face,’ 

https://www.designboom.com/technology/face-depixelizer-ai-pixelated-images-06-22-2020/

As a result, if you trained such a model only on images of people with black skin, you would get a model that produces a depixelized image of a Black man or woman, no matter the skin color of the person captured in the low-resolution photo.
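For readers curious how this works under the hood, here is a toy NumPy sketch of the general idea the quotes describe: search a generator’s latent space for an output whose downscaled version matches the pixelated input. The “generator” below is a random linear map standing in for a real trained face generator, and random search stands in for proper optimization; this is an illustration of the principle, not the actual tool’s code.

# Toy latent-space search: find a "high-res" output that, once downscaled,
# matches the pixelated input. The linear generator is a placeholder assumption.
import numpy as np

rng = np.random.default_rng(1)

HIGH, LOW, LATENT = 64, 8, 16
G = rng.normal(size=(HIGH, LATENT))      # stand-in for a trained generator

def generate(z):
    # Map a latent vector to a "high-res" output (here just a 64-d vector).
    return G @ z

def downscale(x):
    # Average-pool the high-res output down to the "pixelated" resolution.
    return x.reshape(LOW, HIGH // LOW).mean(axis=1)

# The observed pixelated input we want to "depixelize".
pixelated = downscale(generate(rng.normal(size=LATENT)))

# Random search: keep whichever candidate best matches the pixelated input
# after downscaling. The winner is *a* plausible match, not the original --
# so whatever faces dominate the generator's training data dominate the output.
best_z, best_err = None, np.inf
for _ in range(5000):
    z = rng.normal(size=LATENT)
    err = np.sum((downscale(generate(z)) - pixelated) ** 2)
    if err < best_err:
        best_z, best_err = z, err

print("matching error of best candidate:", best_err)

Because many different high-resolution faces collapse to the same pixelated image, the search can only return whatever the generator considers “typical”, which is exactly where the training-data bias re-enters.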

Sexism in Language Models

In my opinion, a bigger problem is the one we have with language models. These are a class of AI models that figure out the essential relationships in word distributions and connections based on a massive volume of text data. And when we say massive, we are talking about gigabytes of text-only data; Wikipedia, for example, weighs 17.5 GB compressed. But no matter the volume of training data, at its very core, “calculating” the model is pure statistics. The result of these statistics is a distinction between “man” and “woman” tasks/jobs/work/interests, etc.
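A tiny sketch of what “pure statistics” means here: counting how often words appear near each other. The three sentences below are made up for illustration; with a real corpus such as Wikipedia, the same kind of counting is what produces the skewed associations discussed further below.

# Minimal co-occurrence counting on a made-up three-sentence "corpus".
from collections import Counter
from itertools import combinations

corpus = [
    "the man worked as a soldier",
    "the man served as a soldier abroad",
    "the woman worked as a kindergarten teacher",
]

pair_counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    for w1, w2 in combinations(tokens, 2):   # every word pair within a sentence
        pair_counts[frozenset((w1, w2))] += 1

print(pair_counts[frozenset(("man", "soldier"))])    # 2
print(pair_counts[frozenset(("woman", "soldier"))])  # 0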

Is this sexist? Yes.

Is this something that the models learned? Yes.

Is this something that we can change? No.

We can’t modify and fix Wikipedia’s content to make it gender-blind. One of the side-products of the language-model training process is word embeddings. They are n-dimensional vector representations of the words found in the vocabulary of the training text corpus, and they capture correlations between words. The distance between the vectors for “man” and “soldier” is smaller than the distance between the vectors for “woman” and “soldier”. Why? Only because “man” and “soldier” appear close to each other more frequently in the training data than “woman” and “soldier” do. Similarly, the vectors for “woman”, “kindergarten”, and “teacher” are more closely grouped than those for “man”, “kindergarten”, and “teacher”. Does this fact make language models sexist?
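If you want to check this asymmetry yourself, a few lines of Python with off-the-shelf pretrained vectors are enough. The sketch below assumes gensim is installed and downloads the public GloVe vectors on first use; the exact numbers depend on the training corpus, and the only point is that the similarities for “man” and “woman” differ.

# Compare word-vector similarities using pretrained GloVe vectors via gensim.
# (Assumes gensim is installed; the first call downloads the vectors.)
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")   # pretrained 50-dimensional vectors

print("man   ~ soldier:", vectors.similarity("man", "soldier"))
print("woman ~ soldier:", vectors.similarity("woman", "soldier"))
print("man   ~ teacher:", vectors.similarity("man", "teacher"))
print("woman ~ teacher:", vectors.similarity("woman", "teacher"))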

Can we fix these models’ behavior? We have two possibilities:

  • Dealing with already generated vectors, and
  • Dealing with input training data

The first approach was proposed by Andrew Ng, one of today’s AI legends, in one of his Coursera courses on Deep Learning. What he proposed, in the context of what is written above, is to manually fix the vectors so that the distance between “man” and “soldier” becomes very close to the distance between “woman” and “soldier”. That may be doable, but nobody has implemented it at scale yet, so nobody is aware of the consequences of this process and how it would affect the precision of the modified language model on the tasks required of it. It may be less sexist, but also less statistically correct, and therefore it will generate less accurate results.
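As a rough illustration of that first approach, here is a NumPy sketch in the spirit of the “neutralize” step from the debiasing recipe Ng’s course discusses (Bolukbasi et al.): remove the component of a supposedly gender-neutral word along an estimated gender direction. The vectors below are random placeholders rather than real embeddings, so this only shows the mechanics, not the effect on a real model.

# "Neutralize" sketch: project the gender component out of a neutral word's vector.
# Random placeholder vectors; the full recipe also "equalizes" pairs like man/woman.
import numpy as np

rng = np.random.default_rng(2)
dim = 50
e_man, e_woman, e_soldier = (rng.normal(size=dim) for _ in range(3))

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Gender direction estimated from a single word pair (real recipes average several).
g = e_woman - e_man
g = g / np.linalg.norm(g)

# Subtract the projection of "soldier" onto the gender direction.
e_soldier_debiased = e_soldier - np.dot(e_soldier, g) * g

print("gender component before:", cosine(e_soldier, g))
print("gender component after: ", cosine(e_soldier_debiased, g))   # ~0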

The other possibility is to manually change the root of the language models, i.e. the text corpus used to train them.

Can we fix the whole of Wikipedia so that it is not sexist? Maybe, with enough resources and human power available.

Can we fix the content of all the digitized books that are also used as text corpora for training language models? How about Shakespeare? Can we, and should we, consider fixing his writings?

What do you think about this?
