Reboot AI with human values

A security staff member wears augmented-reality glasses to measure people’s body temperatures in Hangzhou, China. Credit: Wang Gang/China News Service/Getty

In AI We Trust: Power, Illusion and Control of Predictive Algorithms. Helga Nowotny. Polity (2021).

In the 1980s, a plaque at NASA’s Johnson Space Center in Houston, Texas, declared: “In God we trust. All others must bring data.” Helga Nowotny’s latest book, In AI We Trust, is more than a play on the first phrase of this quote, attributed to statistician W. Edwards Deming; it is chiefly concerned with the second idea.

What happens, Nowotny asks, when we deploy artificial intelligence (AI) without interrogating its effectiveness, simply trusting that it ‘works’? What happens when we fail to take a data-driven approach to things that are themselves data driven? And what about when AI is shaped and influenced by human bias? Data can be inaccurate, of poor quality or missing. And technologies are, Nowotny reminds us, “intrinsically intertwined with conscious or unconscious bias since they reflect existing inequalities and discriminatory practices in society”.

Nowotny, a founding member and former president of the European Research Council, has a track record of trenchant thought on how society should handle innovation. Here, she offers a compelling analysis of the risks and challenges of the AI systems that pervade our lives. She makes a strong case for digital humanism: “Human values and perspectives ought to be the starting point” for the design of systems that “claim to serve humanity”.

The paradox is this. Data-driven technologies — from facial recognition to loan calculators — appeal to the desire for certainty, and to the yearning to understand and predict. Witness the rapid take-up of algorithms in education, public services and marketing. Yet these technologies shape and influence us in ways that can reduce our agency, power and control. In predicting our behaviour, AI systems can end up changing it. Exhibit A: advertising tech that, by collecting data about our likes and preferences, aims to predict what we want to buy and, in turn, shapes our choices far beyond consumerism.

Instead of ceding the future to what data scientist Cathy O’Neil dubbed “weapons of math destruction”, human wisdom needs to retake the helm, Nowotny argues. She spells out the consequences of an uncritical approach to AI, such as the implications of expanding surveillance without accountability or clarity about boundaries or purpose. To what end, she encourages us to ask, are data points (facial recognition, location data, genomics, biometrics) monitored and mapped? She contends that the construction of a “mirror world”, in which every person has a digital counterpart, affects our perception of self in relation to others. The AI-mediated life can fuel identity anxieties: “We are never quite sure whether we are looking at our true authentic self or a self fabricated.” Last month’s news reports of research showing the devastating impact of image-sharing sites on teenage girls’ mental health are a case in point.

An autonomous robot at work in an experimental shop at the Institute of Advanced Industrial Science and Technology in Tokyo. Credit: David Mareuil/Anadolu Agency/Getty

Storytelling about the potential of AI also comes in for scrutiny. Nowotny draws on work by historian Yuval Noah Harari and economist Robert Shiller on the contagiousness of stories. She highlights the tenacity of the narrative that technology always benefits everyone, even though this is not aligned with lived experience. “If half of working-class men in the US today earn less than their fathers did at the same age, what does progress mean to them?” she asks. And she examines how we conceptualize data itself. It should not be thought of as a commodity, to be enclosed or fenced off within the paradigm of property rights, she explains; rather, it is a social good.

Wisdom, in this vision, amounts to more than a simple technical fix. For instance, ‘explainability’ — ensuring that AI makes decisions in a transparent way, rather than in a ‘black box’ — is often proposed as a cure-all. But it creates new tensions, because of a fundamental misalignment that economist Diane Coyle and computer scientist Adrian Weller have explored. The deterministic nature of an algorithm forces its designers to make explicit which values and political choices it will serve. Human-led policymaking, by contrast, is ambiguous, informed by the negotiation of trade-offs: implicit choices that are not easily forced into the light. Consider the furore about baked-in bias last year, when, during the pandemic, an algorithm was used to predict UK school pupils’ exam results on the basis of past data.

This work is a fascinating and timely meditation. Nowotny makes connections across economics, philosophy, law, science and technology studies, history and sociology to engage with the potential and pitfalls of AI and data-driven technologies. She throws out provocative questions and does not become too prescriptive — the mark of a good book.

Competing Interests

The author declares no competing interests.

Reema Patel