Should AI be used to classify humans? An AI researcher at USC says it’s reductive and ethically dubious

  • AI algorithms have been shown to reproduce racist and sexist stereotypes when categorizing people.
  • Real-world applications could spell trouble for historically marginalized groups, researchers warn.
  • Kate Crawford, author of “Atlas of AI,” discussed the ethics of categorizing humans using AI.

The future is artificial intelligence — at least according to the tech industry. 

As the industry inches toward the prospect of artificial general intelligence — AI with the ability to learn and act like humans — one of the ways the technology is already being used is to classify people. Businesses are cashing in, deploying it in everything from surveillance to social media.

However, experts say that as AI advances, many engineers and scientists are still overlooking the harm that these algorithms can inflict on marginalized communities.

Racist and sexist biases already exist in many of the algorithms on the market, experts previously told Insider, and not enough is being done to regulate the industry.

In an email interview with Insider, Kate Crawford, an AI researcher at USC Annenberg and author of “Atlas of AI,” expanded on the risks of relying on AI to classify humans.

Where did the idea of using AI to classify humans come from and why are companies interested in using this technology? 

There’s a decades-long history of using AI for classification tasks like emotion recognition and the automated detection of race, gender, sexuality, and even criminality. In my book “Atlas of AI,” I go into this history to trace the ways companies have used these techniques in contexts like policing and targeted advertising. But there is no scientific consensus that these approaches are valid at all. Classifying people in this way assumes that gender, race, and sexuality are biological and visible from the face. That, as we know, simply isn’t true. These approaches are grounded in the premise that biology is destiny, where our faces become our fate. I call this a ‘phrenological impulse’ – the desire to categorize people’s character based on appearance.

Is there any safe or ethical way in which AI technology can be used to classify humans? 

Let’s consider what happens in gender classification. Reducing humans to binary gender categories and rendering transgender people invisible or “deviant” are common features of classification schemes in machine learning. Os Keyes’s paper on automatic gender detection in facial recognition shows that 95 percent of papers in the field treat gender as binary, with the majority describing gender as immutable and physiological. This goes against decades of research showing that gender is cultural, relational, and historical. Some companies, like Facebook, responded by creating 50-plus gender categories, but that fails to address the deeper harm of allocating people into gender or race categories without their consent in the first place.

Machine learning systems are, in a very real way, constructing race and gender: they are defining the world within the terms they have set, and this has long-lasting ramifications for the people who are classified. When such systems are hailed as scientific innovations for predicting identities, this erases the technical and ethical problems of how the systems were built. 

So the question here is why do it at all? Separate from the serious ethical questions, classifying people’s gender, race, or sexuality without their consent or input is likely to have a significant error rate. Technical systems are making ideological interventions when they give simplistic labels for something as complex as personal identity, and they do so using a reductive set of possibilities of what it is to be human. That restricts the range of how people are represented and understood, which has serious impacts for all of us.
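Crawford’s point about fixed label sets can be made concrete. The sketch below is purely illustrative, with hypothetical names and random stand-in weights: it shows the decision step of a typical binary gender classifier, where the label set is baked in at design time and the final argmax guarantees that every face receives one of the two labels, with no way to abstain or to represent anyone outside them.

```python
import numpy as np

# Hypothetical sketch of a binary gender classifier's decision step.
# The label list below is the model's entire universe of outcomes,
# fixed before any data is seen.
LABELS = ["female", "male"]

def classify(face_embedding: np.ndarray, weights: np.ndarray) -> str:
    """Map any face embedding onto one of the two predefined labels."""
    logits = weights @ face_embedding                # shape: (2,)
    probs = np.exp(logits) / np.exp(logits).sum()    # softmax over two classes
    return LABELS[int(np.argmax(probs))]             # a label is always produced

rng = np.random.default_rng(0)
embedding = rng.normal(size=128)      # stand-in for a face embedding
weights = rng.normal(size=(2, 128))   # stand-in for trained weights
print(classify(embedding, weights))   # always prints "female" or "male"
```

Whatever the person’s actual identity, the function returns one of two strings. The reduction Crawford describes is structural, built into the output space before the model ever sees a face.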

In your book, you talk about the questionable and sometimes unethical ways in which data is collected and used to train AI. Why is the AI industry so adamant about using data and data collection methods that have been proven to exacerbate inequality?

Machine learning models require ongoing flows of data to become more accurate. But these models are asymptotic, never reaching full precision, which propels the justification for more data extraction from as many people as possible. This has created a shift away from ideas like individual informed consent or ethical review of human-subjects research toward the creation of “data subjects”: agglomerations of data points without subjectivity or context or clearly defined rights. That kind of thinking has created a type of ethical distance and promoted a culture where engineers aren’t rewarded for thinking about the wider responsibilities of what they build.
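Crawford’s “asymptotic” remark maps onto a familiar empirical pattern: model error tends to fall with dataset size but flattens toward an irreducible floor. The toy curve below uses invented constants purely to illustrate that shape; it is not drawn from any real system.

```python
# A stylized, invented learning curve: error(n) = a * n**(-b) + c.
# The constants a, b, and c are hypothetical. The point is the shape:
# error falls as the dataset grows but flattens toward a floor c
# that is never reached.

def stylized_error(n: int, a: float = 2.0, b: float = 0.3, c: float = 0.05) -> float:
    """Hypothetical test error after training on n examples."""
    return a * n ** (-b) + c

for n in [10**3, 10**4, 10**5, 10**6, 10**7]:
    print(f"{n:>12,} examples -> error ~ {stylized_error(n):.3f}")
```

Each tenfold increase in data buys a smaller improvement, yet the error never reaches the floor, which is why, on this logic, the demand for more data never switches off.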

Do you think the AI industry is doing enough to combat these ethical issues? Is the potential for creating new technology getting in the way of implementing safety measures?

When AI systems are revealed to be producing discriminatory results along the categories of race, gender, or disability, then some companies commit to reforming their tools or diversifying their data. But the result is often a narrow technical response, usually an attempt to address technical errors and skewed data to make the AI system appear more fair. There’s much more to be done to address the underlying logic of using automated classification and the potential harm that this can inflict.

What is your sense of people’s understanding of these issues and how they can directly affect them? Should the general public pay more attention to the AI industry?

I’ve definitely seen an increase in public awareness of the problems of bias and classification in AI, but what is most needed is regulatory action. Some government entities, such as the Council of the European Union, the UK Information Commissioner’s Office, and the Federal Trade Commission, are taking strong steps toward regulation, but we need more efforts across many countries to ensure people are protected from these harms.
