UK data watchdog investigates whether AI systems show racial bias



ICO says AI-driven discrimination can lead to job rejections or people being wrongfully denied bank loans or benefits

Thu 14 Jul 2022 06.01 BST

The UK data watchdog is to investigate whether artificial intelligence systems are showing racial bias when dealing with job applications.

The Information Commissioner’s Office said AI-driven discrimination could have “damaging consequences for people’s lives” and lead to someone being rejected for a job or being wrongfully denied a bank loan or a welfare benefit.

It will investigate the use of algorithms to sift through job applications, amid concerns that they are affecting employment opportunities for people from ethnic minorities.

“We will be investigating concerns over the use of algorithms to sift recruitment applications, which could be negatively impacting employment opportunities of those from diverse backgrounds,” said the ICO.

The investigation is being announced as part of a three-year plan for the ICO under the UK’s new information commissioner, John Edwards, who joined the ICO in January after running its New Zealand counterpart.

In a speech on Thursday, Edwards is expected to say that the ICO will be “looking at the impact AI use could be having on groups of people who aren’t part of the testing for this software, such as neurodiverse people or people from ethnic minorities”.

The CEO of ZipRecruiter, a jobs website, told the Guardian this year that at least three-quarters of all CVs submitted for jobs in the US are read by algorithms. A survey of recruiting executives conducted by the research and consulting firm Gartner last year found that almost all reported using AI for part of the recruiting and hiring process – for instance sifting applications before they are seen by a person.

Under the UK General Data Protection Regulation, which is enforced by the ICO, people have a right to non-discrimination in the processing of their data. The ICO has warned in the past that AI-driven systems could produce outcomes that disadvantage certain groups if they are not represented accurately or fairly in the dataset that the algorithm is trained and tested on. The UK Equality Act 2010 also offers individuals protection from discrimination, whether caused by a human or an automated decision-making system.

Dr David Leslie, director of ethics and responsible innovation research at The Alan Turing Institute, said: “The use of data-driven AI models in recruitment processes raises a host of thorny ethical issues, which demand forethought and diligent assessment on the part of both system designers and procurers.

“Most basically, predictive models that could be used to filter job applications through techniques of supervised machine learning run the risk of replicating, or even augmenting, patterns of discrimination and structural inequities that could be baked into the datasets used to train them.”

Elsewhere in its three-year plan, the UK watchdog will look at whether to prioritise the public interest when considering complaints about freedom of information requests. The ICO said changes to the complaints process for the FoI regime – which grants members of the public the legal right to request official information from public bodies – are necessary due to rising complaints and fewer resources to deal with them.

Edwards said the current system for dealing with FoI complaints – for reasons that include late responses or incomplete information disclosures – was not working and it would consult on a new regime. The number of complaints reached 6,361 in the year to 30 April 2022, which was a slight dip on the prior year but compares with 5,433 in 2016/17.

“I want to explore whether we should be able to elevate some cases or pull some out and say: this goes on a fast track, because it’s in the public interest,” said Edwards.

Edwards said the organisation expected to avoid accusations of bias in what it prioritised by publishing a list of criteria for sifting through complaints.

Edwards said the ICO would consider prioritising submissions from journalists and MPs, although it would also weigh a “blind applicant” system where the identity of the person or organisation making the submission was concealed. Information held by Scottish public authorities is covered by the country’s own commissioner and Freedom of Information Act.

“Should we differentiate between who’s making the request … and say a journalist is acting on behalf of the wider public to illuminate a particular issue. So therefore, they get extra points. A member of parliament, for example, has a very particular role in holding the executive and public authorities to account so maybe there’s a case for adding extra weight to those [requests].”


Dan Milmo Global technology editor