The top 12 people in artificial-intelligence policy, ethics, and research


AI presents an ethical minefield that’s pushed researchers, tech companies, and policymakers into opposing camps.

Our list features people who have divergent views on how AI should be developed, from Shane Legg, a cofounder of Google DeepMind who’s working to develop artificial general intelligence, to the researchers Emily M. Bender and Timnit Gebru, the coauthors of the watershed paper “On the Dangers of Stochastic Parrots.”

“The problem is that algorithms aren’t the kind of thing that can have accountability — only people can do that,” Bender told Business Insider via email. “And ‘intelligence’ is not a good cover term for those concepts, nor a neutral way to refer to this tech.”

Insider identified the top 100 people in artificial intelligence. Here are our picks for policy, ethics, and research.

Ali Alkhatib

Ali Alkhatib, an AI researcher whose work focuses on how society interacts with algorithms.

Ali Alkhatib

Alkhatib, who’s trained as a cultural anthropologist, served as the interim director of the University of San Francisco’s Center for Applied Data Ethics. His research focuses on how society interacts with algorithms and how AI can harm marginalized groups who may be underrepresented in the data used to train AI models. “AIs cause so much harm because [AI models] exhort us to live in their utopia,” Alkhatib wrote in a 2021 research paper.

Emily Bender

Emily Bender, a professor of linguistics at the University of Washington.

University of Washington

Bender, a linguist who teaches at the University of Washington, has significantly contributed to our understanding of how large language models like ChatGPT process language and of their limitations with non-English languages. She has urged companies and policymakers to focus on the potential dangers of AI — such as human-sounding AI manipulating users — and on possible solutions.

Jack Clark

Jack Clark, a cofounder of the AI startup Anthropic.


Clark is a cofounder of the AI startup Anthropic, which has raised over $5 billion, including a $4 billion investment from Amazon in September. Before starting Anthropic, he was the policy director at OpenAI. Clark said he’s read thousands of AI research papers and uses that knowledge to discuss the future of AI with world leaders and other policymakers. He also writes a popular newsletter called Import AI. “The greatest challenge of the 21st century is to make an increasingly fast-moving technical world ‘legible’ to a large number of people,” he wrote in his Import AI bio. “Things will be weird. Be not afraid.”

Inioluwa Deborah Raji

Inioluwa Deborah Raji, an AI researcher at Mozilla.


Raji is a fellow and AI researcher at the nonprofit Mozilla, which advocates for “ethical tech and trustworthy AI.” She uses algorithmic audits to challenge corporate claims about AI’s performance and bias and researches ways to keep companies accountable and limit the negative effects of their machine-learning products. Raji contributed to the “Actionable Auditing” paper, which documented the impact of disclosing bias in AI systems, and the “Gender Shades” paper, which showed that facial recognition technology was less accurate in identifying women and people of color. She’s also the founder of Project Include, an educational nonprofit that gives students from underprivileged backgrounds training in STEM fields.

Timnit Gebru

Timnit Gebru, a researcher who established the Distributed AI Research Institute.


Gebru and other researchers sounded the alarm on the dangers of Google’s natural-language-processing tools and on discrimination at the company in 2020; she was fired shortly afterward. In 2021, the Ethiopian-born data scientist established the Distributed AI Research Institute, which aims to limit the harms associated with AI, such as reinforcing racism and other forms of discrimination, and instead center diverse perspectives in the technology’s development and deployment. Gebru also coauthored the “On the Dangers of Stochastic Parrots” paper, which argued that large language models, like the birds, ingest data and parrot it back without understanding.

Geoffrey Hinton

Geoffrey Hinton, a cognitive psychologist and computer scientist.


Hinton has been called “the Godfather of AI” for his foundational work on the technology behind large language models like ChatGPT and for his research on deep learning at Google Brain. Ilya Sutskever, who became a cofounder of OpenAI, was part of Hinton’s groundbreaking research team at the University of Toronto. Hinton sent shock waves through the industry when he resigned from Google earlier this year and publicly painted a doomsday picture of the existential threat the technology represents. Hinton’s about-face on AI demonstrates the predicament the technology poses: Does its ability to solve problems outweigh the potential dangers it presents?

Shane Legg

Shane Legg, a cofounder of Google DeepMind.

Google DeepMind

Legg is among those working to make artificial general intelligence — AI that can perform human cognitive tasks — a reality. He is a cofounder of the AI research lab DeepMind, which has since merged with Google Brain to form Google DeepMind. Researchers had been studying AGI since the late 1990s, but Legg’s 2007 paper on universal intelligence put the concept on the map. While Legg fends off competitors like OpenAI at DeepMind, he has spoken openly about AI’s risks and threats to humanity. Before getting his Ph.D. in math and artificial intelligence, Legg briefly studied finance in Switzerland.

Margaret Mitchell

Margaret Mitchell, a researcher and the chief ethics scientist at Hugging Face.

Margaret Mitchell

Since 2021, Mitchell has worked as a researcher and the chief ethics scientist at Hugging Face, a machine-learning model-sharing platform for developers. Her time at Google as the co-lead of its AI ethics team famously ended after she coauthored the “On the Dangers of Stochastic Parrots” research paper, which criticized Big Tech’s size-over-safety approach to building large language models. Mitchell had called out biased and exclusionary AI culture even before she joined Google: she left her role as a researcher at Microsoft when she felt her concerns about the limitations of AI models weren’t taken seriously. Some of her most notable research focuses on ways to make machine-learning models more transparent and accountable.

Safiya Noble

Safiya Noble, a professor and cofounder of the UCLA Center for Critical Internet Inquiry.

Safiya Noble

Noble, an expert on algorithmic discrimination and AI ethics, studies how race, class, and societal structures interact with the internet and other technologies. She cofounded UCLA’s Center for Critical Internet Inquiry nearly a decade ago and has taught information studies and African-American studies at the university. A prolific writer and researcher, she authored a study on how Google’s search engine displays bias against Black women and how racism online influences racism in the offline world. Noble is the recipient of several awards, including a 2021 MacArthur Fellowship.

Ranjit Singh

Ranjit Singh, a senior researcher at Data & Society.

Ranjit Singh

Singh is a senior researcher at Data & Society, a nonprofit founded in 2013 that studies the social impacts of data-centric technologies. His research examines how data shapes our lives and aims to establish ethical research practices. In May, Data & Society launched its Algorithmic Impact Methods Lab to develop methods for assessing AI systems in the public interest. Singh also has a keen interest in the global majority — the non-white people who make up most of the world’s population — and wrote his dissertation on India’s biometrics-based national identification infrastructure, examining the program’s problems and what it means to be a citizen in a digital age. He is also a fellow with the Siegel Family Endowment, a technology-focused grantmaking organization.

Camille Stewart Gloster

Camille Stewart Gloster, the deputy national cyber director for the White House.

Camille Stewart Gloster

Stewart Gloster, who received her law degree from American University, has worked at the Department of Homeland Security and Google. As the deputy national cyber director for the White House, she advises President Joe Biden and federal agencies on AI issues, cybersecurity workforce education, and plans for national technology security. In a recent interview, she said she will focus on creating a “highly skilled and diverse” workforce of cyber pros. Before her White House role, Stewart Gloster co-led #ShareTheMicInCyber, a campaign to raise awareness of the systemic racism and sexism faced by women and people of color in cybersecurity professions.

Meredith Whittaker

Meredith Whittaker, the president of the Signal Foundation.

AI Now Institute

Whittaker is the president of the Signal Foundation — the nonprofit organization whose subsidiary developed the encrypted messaging app Signal. Before joining Signal, Whittaker spent over a decade at Google, where she founded its Open Research group. In 2017, she cofounded the AI Now Institute, a policy-research organization that studies the implications of AI on society and aims to address the power of the tech industry. She has spoken out against Google’s insufficient response to the harms its AI presented and was central in multiple protests — including as an organizer for the 2018 Google walkouts — before she quit the company in 2019. She continues to speak out against the “surveillance capitalism” at the heart of Big Tech companies’ business models.


Monica Melton