Six researchers who are shaping the future of artificial intelligence

As artificial intelligence (AI) becomes ubiquitous in fields such as medicine, education and security, there are significant ethical and technical challenges to overcome.

CYNTHIA BREAZEAL: Personal touch


While the credits to Star Wars drew to a close in a 1970s cinema, 10-year-old Cynthia Breazeal remained fixated on C-3PO, the anxious robot. “Typically, when you saw robots in science fiction, they were mindless, but in Star Wars they had rich personalities and could form friendships,” says Breazeal, associate director of the Massachusetts Institute of Technology (MIT) Media Lab in Cambridge, Massachusetts. “I assumed these robots would never exist in my lifetime.”

A pioneer of social robotics and human–robot interaction, Breazeal has made a career of conceptualizing and building robots with personality. As a master’s student at MIT’s Humanoid Robotics Group, she created her first robot, an insectile machine named Hannibal that was designed for autonomous planetary exploration and funded by NASA.

Some of the best-known robots Breazeal developed as a young researcher include Kismet, one of the first robots that could demonstrate social and emotional interactions with humans; Cog, a humanoid robot that could track faces and grasp objects; and Leonardo, described by the Institute of Electrical and Electronics Engineers in New Jersey as “one of the most sophisticated social robots ever built”.

In 2014, Breazeal founded Jibo, a Boston-based company that launched her first consumer product, a household robot companion, also called Jibo. The company raised more than US$70 million and sold more than 6,000 units. In May 2020, NTT Disruption, a subsidiary of the London-based telecommunications company NTT, bought the Jibo technology and plans to explore the robot’s applications in health care and education.

Breazeal returned to academia full time this year as director of the MIT Personal Robots Group. She is investigating whether robots such as Jibo can help to improve students’ mental health and wellbeing by providing companionship. In a preprint published in July, which has yet to be peer-reviewed, Breazeal’s team reports that daily interactions with Jibo significantly improved the mood of university students (S. Jeong et al. Preprint at https://arxiv.org/abs/2009.03829; 2020). “It’s about finding ways to use robots to help support people,” she says.

In April 2020, Breazeal launched AI Education, a free online resource that teaches children how to design and use AI responsibly. “Our hope is to turn the hundreds of students we’ve started with into tens of thousands in a couple of years,” says Breazeal. — by Benjamin Plackett

CHEN HAO: Big picture


Analysing medical images is an intensive, technical task, and there are not enough pathologists and radiologists to meet demand. In a 2018 survey by the UK’s Royal College of Pathologists, just 3% of National Health Service histopathology departments (which study diseases in tissues) said they had enough staff. A June 2020 report by the Association of American Medical Colleges found that the shortage of physician specialists in the United States could climb to nearly 42,000 by 2033.

AI systems that automate part of medical image analysis could be key to easing the burden on specialists. They can cut tasks that usually take hours or days down to seconds, says Chen Hao, founder of Imsight, an AI medical-imaging start-up based in Shenzhen, China.

Launched in 2017, Imsight’s products include Lung-Sight, which can automatically detect and locate signs of disease in CT scans, and Breast-Sight, which identifies and measures the metastatic area in a tissue sample. “The analysis allows doctors to make a quick decision based on all of the information available,” says Chen.

Since the outbreak of COVID-19, two of Shenzhen’s largest hospitals have been using Imsight’s imaging technology to analyse subtle changes in patients’ lungs caused by treatment, which enables doctors to identify cases with severe side effects.

In 2019, Chen received the Young Scientist Impact Award from the Medical Image Computing and Computer-Assisted Intervention Society, a non-profit organization in Rochester, Minnesota. The award recognized a paper he led that proposed using a neural network to process fetal ultrasound images (H. Chen et al. in Medical Image Computing and Computer-Assisted Intervention — MICCAI 2015 (eds N. Navab et al.) 507–514; Springer, 2015). The technique, which has since been adopted in clinical practice in China, reduces the workload of the sonographer.

Despite the rapid advancement of AI’s role in health care, Chen rejects the idea that doctors can be easily replaced. “AI will not replace doctors,” he says. “But doctors who are better able to utilize AI will replace doctors who cannot.” — by Hepeng Jia

ANNA SCAIFE: Star sifting


When construction of the Square Kilometre Array (SKA) is complete, it will be the world’s largest radio telescope. With roughly 200 radio dishes in South Africa and 130,000 antennas in Australia expected to be installed by the 2030s, it will produce an enormous amount of raw data, more than current systems can efficiently transmit and process.

Anna Scaife, professor of radio astronomy at the University of Manchester, UK, is building an AI system to automate radio astronomy data processing. Her aim is to reduce manual identification, classification and cataloguing of signals from astronomical objects such as radio galaxies, active galaxies that emit more light at radio wavelengths than at visible wavelengths.

In 2019, Scaife was the recipient of the Jackson-Gwilt Medal, one of the highest honours bestowed by the UK Royal Astronomical Society (RAS). The RAS recognized a study led by Scaife, which outlined data calibration models for Europe’s Low Frequency Array (LOFAR) telescope, the largest radio telescope operating at the lowest frequencies that can be observed from Earth (A. M. M. Scaife and G. H. Heald Mon. Not. R. Astron. Soc. 423, L30–L34; 2012). The techniques in Scaife’s paper underpin most low-frequency radio observations today.

“It’s a very peculiar feeling to win an RAS medal,” says Scaife. “It’s a mixture of excitement and disbelief, especially because you don’t even know that you were being considered, so you don’t have any opportunity to prepare yourself. Suddenly, your name is on a list that commemorates more than 100 years of astronomy history, and you’ve just got to deal with that.”

Scaife is the academic co-director of Policy@Manchester, the University of Manchester’s policy engagement institute, where she helps researchers to better communicate their findings to policymakers. She also runs a data science training network involving South African and UK partner universities, with the aim of building a team of researchers who can work with the SKA once it comes online. “I hope that the training programmes I have developed can equip young people with skills for the data science sector,” says Scaife. — by Andy Tay

TIMNIT GEBRU: Algorithmic bias


Computer vision is one of the most rapidly developing areas of AI. Algorithms trained to read and interpret images are the foundation of technologies such as self-driving cars, surveillance and augmented reality.

Timnit Gebru, a computer scientist and former co-lead of the Ethical AI Team at Google in Mountain View, California, recognizes the promise of such advances, but is concerned about how they could affect underrepresented communities, particularly people of colour. “My research is about trying to minimize and mitigate the negative impacts of AI,” she says.

In a 2018 study, Gebru and Joy Buolamwini, a computer scientist at the MIT Media Lab, concluded that three commonly used facial analysis algorithms drew overwhelmingly on data obtained from light-skinned people (J. Buolamwini and T. Gebru. Proc. Mach. Learn. Res. 81, 77–91; 2018). Because of that lack of data, error rates for dark-skinned females were as high as 34.7%, whereas the maximum error rate for light-skinned males was 0.8%. This could result in people with darker skin getting inaccurate medical diagnoses, says Gebru. “If you’re using this technology to detect melanoma from skin photos, for example, then a lot of dark-skinned people could be misdiagnosed.”
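The methodological point behind those numbers is easy to state in code: rather than reporting a single overall accuracy, performance is broken down by demographic subgroup. The sketch below is a generic illustration of that kind of disaggregated evaluation, using fabricated toy labels; it is not the benchmark or data from the study.

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Misclassification rate computed separately for each subgroup."""
    errors, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Fabricated toy labels, purely to show the bookkeeping:
y_true = ["F", "F", "M", "M", "F", "M"]
y_pred = ["M", "F", "M", "M", "F", "M"]
groups = ["darker-skinned female", "darker-skinned female",
          "lighter-skinned male", "lighter-skinned male",
          "lighter-skinned female", "darker-skinned male"]
print(error_rates_by_group(y_true, y_pred, groups))
```

A large gap between subgroup error rates, invisible in an aggregate score, is exactly the kind of disparity the study reported.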

Facial recognition used for government surveillance, such as during the Hong Kong protests in 2019, is also highly problematic, says Gebru, because the technology is more likely to misidentify a person with darker skin. “I’m working to have face surveillance banned,” she says. “Even if dark-skinned people were accurately identified, it’s the most marginalized groups that are most subject to surveillance.”

In 2017, as a PhD student at Stanford University in California under the supervision of Li Fei-Fei, Gebru co-founded the non-profit Black in AI with Rediet Abebe, a computer scientist at Cornell University in Ithaca, New York. The organization seeks to increase the presence of Black people in AI research by providing mentorship for researchers as they apply to graduate programmes, navigate graduate school, and enter and progress through the postgraduate job market. It also advocates for structural changes within institutions to address bias in hiring and promotion decisions. Its annual workshop calls for papers with at least one Black researcher as the main author or co-author. — by Benjamin Plackett

YUTAKA MATSUO: Internet miner


In 2010, Yutaka Matsuo created an algorithm that could detect the first signs of earthquakes by monitoring Twitter for mentions of tremors. His system not only detected 96% of the earthquakes registered by the Japan Meteorological Agency (JMA), but also sent e-mail alerts to registered users faster than the JMA could broadcast its announcements.
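The underlying idea can be sketched without Matsuo’s actual classifier, which is not reproduced here: treat each tweet as a noisy sensor reading and raise an alert when tremor-related posts suddenly spike. The keyword list, window size and threshold in the Python sketch below are arbitrary, illustrative choices.

```python
from collections import deque

TREMOR_WORDS = ("earthquake", "tremor", "shaking")  # illustrative keyword list

def looks_like_tremor_report(tweet):
    """Crude keyword filter standing in for a trained tweet classifier."""
    return any(word in tweet.lower() for word in TREMOR_WORDS)

class BurstDetector:
    """Raise an alert when tremor-related posts spike within a time slot."""

    def __init__(self, window=5, threshold=50):
        self.counts = deque(maxlen=window)   # recent per-slot match counts
        self.threshold = threshold           # arbitrary trigger level

    def update(self, tweets_in_slot):
        matches = sum(looks_like_tremor_report(t) for t in tweets_in_slot)
        self.counts.append(matches)
        return matches >= self.threshold

detector = BurstDetector()
if detector.update(["Huge shaking here right now!"] * 60):
    print("possible earthquake - notify subscribers")
```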

He applied a similar web-mining technique to the stock market. “We were able to classify news articles about companies as either positive or negative,” says Matsuo. “We combined that data to accurately predict profit growth and performance.”

Matsuo’s ability to extract valuable information from what people are saying online has contributed to his reputation as one of Japan’s leading AI researchers. He is a professor at the University of Tokyo’s Department of Technology Management and president of the Japan Deep Learning Association, a non-profit organization that fosters AI researchers and engineers by offering training and certification exams. In 2019, he became the first AI specialist to join the board of the Japanese technology giant SoftBank.

Over the past decade, Matsuo and his team have been supporting young entrepreneurs in launching internationally successful AI start-ups. “We want to create an ecosystem like Silicon Valley, which Japan just doesn’t have,” he says.

Among the start-ups supported by Matsuo is Neural Pocket, launched in 2018 by Roi Shigematsu, a University of Tokyo graduate. The company analyses photos and videos to provide insights into consumer behaviour.

Matsuo is also an adviser for ReadyFor, one of Japan’s earliest crowd-funding platforms. The company was launched in 2011 by Haruka Mera, who first collaborated with Matsuo as an undergraduate student at Keio University in Tokyo. The platform is raising funds for people affected by the COVID-19 pandemic, and reports that its total transaction value for donations rose by 4,400% between March and April 2020.

Matsuo encourages young researchers who are interested in launching AI start-ups to seek partnerships with industry. “Japanese society is quite conservative,” he says. “If you’re older, you’re more likely to get a large budget from public funds, but I’m 45, and that’s still considered too young.” — by Benjamin Plackett

DACHENG TAO: Machine visionary


By 2030, an estimated one in ten cars globally will be self-driving. The key to getting these autonomous vehicles on the road is designing computer-vision systems that can identify obstacles and avoid accidents at least as effectively as a human driver.

Neural networks, sets of AI algorithms inspired by the way neurons fire in the human cerebral cortex, form the ‘brains’ of self-driving cars. Dacheng Tao, a computer scientist at the University of Sydney, Australia, designs neural networks for computer-vision tasks. He is also building models and algorithms that can process videos captured by moving cameras, such as those in self-driving cars.

“Neural networks are very useful for modelling the world,” says Tao, director of the UBTECH Sydney Artificial Intelligence Centre, a partnership between the University of Sydney and global robotics company UBTECH.

In 2017, Tao was awarded an Australian Laureate Fellowship for a five-year project that uses deep-learning techniques to improve moving-camera computer vision in autonomous machines and vehicles. A subset of machine learning, deep learning uses neural networks to build systems that can ‘learn’ through their own data processing.
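As a toy illustration of what ‘learning’ through data processing means in practice (a generic Python example, not one of Tao’s models), a small two-layer network can be trained by gradient descent to reproduce the XOR function from just four examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a tiny dataset the network must learn from examples alone.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    h = sigmoid(X @ W1 + b1)                # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # backpropagate squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out                  # gradient-descent updates
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]] as training proceeds
```

The network is never told the XOR rule; it adjusts its weights until its own outputs match the examples, which is the sense in which deep-learning systems ‘learn’ from data.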

Since launching in 2018, Tao’s project has produced more than 40 journal and conference papers. He was among the most prolific authors of AI research from 2015 to 2019, as tracked by the Dimensions database, and is one of Australia’s most highly cited computer scientists: since 2015, his papers have amassed more than 42,500 citations, as indexed by Google Scholar. In November 2020, he won the Eureka Prize for Excellence in Data Science, awarded by the Australian Museum.

In 2019, Tao and his team trained a neural network to construct 3D environments from a motion-blurred image, such as one captured by a moving car. Details in the image, including the motion, the blurring effect and the depth at which it was taken, helped the researchers to recover what they describe as “the 3D world hidden under the blurs”. The findings could help self-driving cars to better process their surroundings. — by Gemma Conroy
