The Promise and Challenges of Artificial Intelligence

Artificial intelligence (AI), which enables machines to perform advanced, humanlike functions, promises breakthroughs across society—in health care, transportation, education, finance, and beyond. At their best, AI tools perform tasks at a much greater speed, scale, or degree of accuracy than humans—freeing up time and resources for us to solve problems that machines cannot. Chatbots can provide support around the clock; crawlers can scour websites and databases for information; self-driving cars hold the potential to make commutes safer and more efficient.

But the technology is not without its perils. One striking example happened in 2019, when researchers found that a predictive algorithm used by UnitedHealth Group was biased against Black patients. In using health care spending as a proxy for illness, the tool inadvertently perpetuated systemic inequities that have historically kept Black patients from receiving adequate care (Obermeyer, Z., et al., Science, Vol. 366, No. 6464, 2019).

“Algorithms are created by people who have their own values, morals, assumptions, and explicit and implicit biases about the world, and those biases can influence the way AI models function,” said Nicol Turner-Lee, PhD, a sociologist and director of the Center for Technology Innovation at the Brookings Institution in Washington, D.C. Because of these ongoing concerns about equity, privacy, and trust, there’s a growing recognition among researchers and industry experts that responsible innovation requires a sophisticated understanding of human behavior. To that end, psychologists are helping develop and deploy AI software and technologies, including everything from therapeutic chatbots to facial-recognition systems. They’re also amassing a robust literature on human-computer interaction, digital therapeutics, and the ethics of automation.

“As we are developing these emerging technologies, we have to ask ourselves: How will societies interact with them?” said psychologist Arathi Sethumadhavan, PhD, principal research manager on Microsoft’s ethics and society team. “That’s where psychologists come into play, because we are very good at understanding people’s behaviors, motivations, and perceptual and cognitive capabilities and limitations.”

From model to market

Building the algorithms that fuel AI technologies may sound like the sole domain of computer scientists, but psychologists who study intelligence in humans are also helping unlock ways to enhance intelligence in machines.

For example, AI systems often struggle to make informed guesses about things they haven’t seen before—something that even young children can do well. In a series of studies comparing the way children and machines learn, Alison Gopnik, PhD, a professor of psychology and affiliate professor of philosophy at the University of California, Berkeley, and her colleagues have found that kids surpass AI systems in several areas, including exploratory learning, social learning, and building mental models (Scientific American, June 2017).

She is now working with computer scientists Pulkit Agrawal, PhD, of the Massachusetts Institute of Technology, and Deepak Pathak, PhD, of Carnegie Mellon University, to adapt AI technologies in light of those findings. Among other things, Gopnik’s team is looking at how humans can make machines more playful and curious about the world around them.

Pathak and Agrawal have programmed an agent to investigate and model unknown parts of virtual environments; using this technique, an agent can learn to play a Super Mario Bros. game. But one persistent problem is that machines have trouble distinguishing random, unpredictable noise—such as a square of static—from interesting but surprising new events. Children, on the other hand, excel at separating relevant new information from irrelevant noise.
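The noise problem can be seen in a toy sketch of prediction-error curiosity, where an agent is rewarded for being surprised. This is a minimal illustration in Python, not code from Pathak and Agrawal's systems: a learnable pattern quickly becomes boring, while pure static never stops generating "surprise."

```python
import random

random.seed(0)

# Two "channels" a curious agent can attend to: a learnable pattern and
# pure static. Curiosity reward = forward-model prediction error.
def pattern(t):
    return t % 4                     # deterministic and learnable

def static(t):
    return random.randint(0, 3)      # irreducible noise, like a square of static

# Forward model per channel: a table mapping the last value seen to a
# predicted next value.
model = {"pattern": {}, "static": {}}
last = {"pattern": None, "static": None}
late_surprise = {"pattern": 0, "static": 0}   # surprise tallied after learning

for t in range(2000):
    for name, source in (("pattern", pattern), ("static", static)):
        obs = source(t)
        prev = last[name]
        if prev is not None:
            surprise = int(model[name].get(prev) != obs)  # was the model wrong?
            if t >= 1000:            # tally only the second half of the run
                late_surprise[name] += surprise
            model[name][prev] = obs  # update the forward model
        last[name] = obs

# The pattern channel becomes perfectly predictable and stops generating
# curiosity reward; the static channel never does.
```

An agent that seeks out prediction error will therefore fixate on the static channel forever, which is exactly the failure mode Gopnik describes.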

“That’s the big challenge now,” Gopnik said. “Can we figure out how to make AI not just curious but curious about the right kinds of things?”

These algorithms eventually evolve into products that people use, opening up a host of new promises and perils, which psychologists are also exploring. At Microsoft, Sethumadhavan conducts qualitative and quantitative research to understand how people perceive AI technologies and then incorporates those insights into product development.

For example, participants in a recent study of facial-recognition technology perceived advantages to the technology for building access and airport screening—because of clear safety and efficiency gains—but were less bullish on its use for employee monitoring or for providing personalized assistance in retail environments.

“Human beings, when given the time, are always doing a value exchange, weighing the benefits to them and what they are giving up in return,” Sethumadhavan said, adding that the findings can help developers to consider the contexts of use prior to deploying emerging AI technologies and to build the appropriate level of trust with users.

In addition to studying end users, Sethumadhavan’s team documents attitudes of impacted stakeholders. When developing Microsoft’s synthetic speech technology, she interviewed voice actors to understand how the technology could affect their livelihoods. As a result, Microsoft now requires customers of the service to obtain informed consent from any voice actors they employ.

“Ethical product development is not a box to check, but understanding the needs and concerns of your end users and other impacted stakeholders actually helps you innovate better,” she said.

Self-driving cars, which promise major safety and efficiency gains, rely on AI to perceive, interpret, and respond to road conditions and hazards. According to a report by the RAND Corporation, autonomous vehicles could save hundreds of thousands more lives if they are deployed en masse once they are 10% safer than the average human driver, rather than held back until they achieve nearly perfect safety records (Kalra, N., & Groves, D. G., The Enemy of Good, 2017). But getting the public on board may involve as many psychological roadblocks as technical ones, said Azim Shariff, PhD, an associate professor of psychology at the University of British Columbia who studies human-computer interaction and the ethics of automation.

Shariff’s research indicates that people demand much higher levels of safety from autonomous vehicles than from those operated by humans (Transportation Research Part C: Emerging Technologies, Vol. 126, 2021). This is due in part to “algorithm aversion”—our tendency to distrust decisions made by algorithms—and the “better than average” effect, where we overestimate our abilities compared with the general population (“Self-driving cars may be 10% better than average, but I’m 20% better”).

In fact, a focus on the safety gains associated with self-driving cars could backfire, said Shariff, because people also exhibit a “betrayal aversion,” or a reluctance to risk potential harm by something meant to enhance their safety.

“People really don’t like being hurt by things that are supposed to keep them safe,” he said. “If self-driving cars are sold primarily as safety mechanisms, people will overreact every time there’s an accident.”

On the other hand, Gopnik argues that designing safe and effective self-driving cars may be more complicated than we once thought—and require insights not just from physics but also from social psychology.

“Most of what people do when they drive is this amazing social coordination effort,” she said. “Getting machines to do things that may seem straightforward actually requires a much more sophisticated understanding of the world and each other than we initially realized.”

An adjunct, not an alternative

Ethical and behavioral considerations are just as important in the mental health care space, where AI tools serve two primary functions: some algorithms operate behind the scenes to predict health risks or recommend personalized treatment plans, while others interface directly with patients in the form of therapeutic chatbots.

The smartphone application Woebot, for example, uses machine learning and natural language processing to deliver cognitive behavioral therapy (CBT) to tens of thousands of daily users. Through short text-message exchanges with the chatbot, users address stress, relationship problems, and other concerns while learning about CBT concepts such as overgeneralization and all-or-nothing thinking (Fitzpatrick, K. K., et al., JMIR Mental Health, Vol. 4, No. 2, 2017).

Behind the scenes, AI technology fuels hundreds of therapeutic programs. The online therapy platform Talkspace, for example, has developed a suicide alert system that uses natural language processing to analyze written communication between patients and their therapists (Bantilan, N., et al., Psychotherapy Research, Vol. 31, No. 3, 2021), and it is testing AI interventions for post-traumatic stress disorder (Malgaroli, M., et al., Journal of Medical Internet Research, Vol. 22, No. 4, 2020).

Some AI-based programs—including EndeavorRx, a video game designed to treat attention-deficit/hyperactivity disorder—have even received clearance from the U.S. Food and Drug Administration for use under medical supervision (Kollins, S. H., et al., npj Digital Medicine, Vol. 4, 2021).

Most psychologists see AI technologies as an adjunct, rather than an alternative, to traditional psychological treatment. “We’re not trying to replace therapists—there’s no replacement for human connection,” said psychologist Alison Darcy, PhD, the founder and president of Woebot Health. “But we can rethink some of the tools that have traditionally been the unique domain of the clinic and design them so that they are more accessible.”

AI therapeutic tools offer a few clear advantages over traditional mental health care. Machines are available 24 hours a day, they never get tired, they have an encyclopedic knowledge of the psychological literature, and they remember every interaction they’ve had with a client, said psychologist Skip Rizzo, PhD, director for medical virtual reality at the University of Southern California’s Institute for Creative Technologies. They can deliver treatments in real time and can be customized to meet a client’s preferences, including to enhance cultural competence. Digital therapeutic tools can also greatly lower the barriers to accessing mental health care by reducing cost and stigma.

But digital mental health remains a “wild west,” with research, applications, and ethical standards all in their nascent stages, said David Luxton, PhD, a clinical psychologist and an affiliate associate professor at the University of Washington’s School of Medicine. Safety and efficacy are chief concerns, Rizzo added. Most platforms direct users toward support resources during a suspected mental health crisis—and include prominent disclaimers about intended use—but some people may still regard these tools as a substitute for therapy (Professional Psychology: Research and Practice, Vol. 45, No. 5, 2014).

“An app may be based on CBT, but that doesn’t mean that the app itself is evidence-based,” Luxton said. “People who are using it without a licensed therapist may be relying on something untested that could actually cause harm.”

Another problem afflicting both digital therapeutics and other AI products is “algorithmic bias”—when models make biased predictions because of limitations in the training data set or assumptions made by a programmer. Women, Black people, and Hispanic people are underrepresented in the field of computer science, and homogeneous programming teams are more likely to make errors, such as faulty assumptions about educational attainment or health care access, that result in biased AI (Cowgill, B., et al., Proceedings of the 21st ACM Conference on Economics and Computation, 2020). But social scientists can anticipate such assumptions and help developers understand the lived experiences of populations represented in various data sets, said Turner-Lee.

“This is especially important when algorithms are applied in ‘sensitive use’ cases, including credit, employment, education, and health care,” she said.

On the other hand, AI models may hold the power to reduce health disparities. For example, osteoarthritis tends to be more painful for Black patients than non-Black patients, but standard tests only explain 9% of that variance. When a team of researchers used a machine learning algorithm—rather than a human grader—to analyze patients’ knee X-rays, they found physical indicators that explained 43% of the racial disparity in pain (Pierson, E., et al., Nature Medicine, Vol. 27, 2021).

Understanding how humans interact with technology is also key to the success of mental health chatbots. We know that a primary driver of change in therapy is the therapeutic relationship, but in the case of digital therapeutics, that relationship is between a human and a computer. Early research suggests that users can benefit from making emotional disclosures to a bot (Ho, A., et al., Journal of Communication, Vol. 68, No. 4, 2018) and even form a therapeutic bond (Darcy, A., et al., JMIR Formative Research, Vol. 5, No. 5, 2021). Thomas Derrick Hull, PhD, a psychologist who works with Talkspace and the behavioral weight-loss platform Noom, has also found that users tend to prefer interacting with chatbots when they aren’t disguised as humans.

Hull and his colleagues are exploring ways for AI technology to further enhance the process of psychotherapy by using the vast archives of anonymized data collected during Talkspace sessions. For example, natural language processing may be able to identify speech patterns that indicate a breakdown in the therapeutic alliance. A similar algorithm could compare session transcripts with treatment plans and nudge therapists to revisit a topic of concern with a client. AI also holds promise for improving the patient-therapist match, said Hull. By querying vast data sets, researchers may be able to better operationalize client characteristics, therapist characteristics, and what constitutes an ideal match.

“The qualities that make both patients and clinicians unique are critical in the context of treatment,” Hull said. “These characteristics are, however, understudied because the number and complexity is more than we could realistically track, model, and compare. AI can change that.”

Still, data mining raises privacy concerns. In 2020, The New York Times reported that Talkspace executives read excerpts from therapy sessions during a company meeting without maintaining anonymity for the patient, who was an employee of the organization. Talkspace maintains that it obtained the client’s full consent. APA’s Ethics Code and the Health Insurance Portability and Accountability Act require that health care data be fully de-identified before it is shared in order to preserve patient confidentiality.

“It behooves these companies to be very clear about what data might be mined and how they plan to use it,” said Deborah Baker, JD, APA’s director of legal and regulatory policy.

The next frontier

As increasingly sophisticated AI technologies—including autonomous weapons and emotion-detection software—continue to emerge, psychologists have an important role to play in launching them both effectively and responsibly.

For mental health care, the next frontier involves merging facial recognition, natural language processing, and emotion-detection algorithms to make complex assessments about mood and mental states, said Matteo Malgaroli, PhD, a clinical psychologist and assistant professor at New York University’s Grossman School of Medicine. These technologies are already being applied in marketing contexts, where the stakes are significantly lower.

“If you don’t buy my hamburger, I might lose a few dollars,” he said. “But if somebody makes a wrong assessment of depression, that can have very serious consequences.”

For that reason, it’s essential that psychologists participate in the development of clinical AI technologies to ensure algorithms capture data and deliver outcomes that are consistent with validated psychological practices, Malgaroli said.

Moving forward, AI holds the potential to empower traditionally marginalized populations, Sethumadhavan said. In an ongoing fellowship with the World Economic Forum’s AI and machine learning team, she is exploring how the technology can help meet the needs of the aging population, which will exceed 1.6 billion by 2050. AI may ultimately help address social isolation, transportation and mobility, mental and physical health, caregiver burden, and end-of-life planning for this group (AI and Ageing, World Economic Forum, 2021).

Ultimately, APA’s Ethics Code will help psychologists proceed with caution amid the growing questions about equity, security, and surveillance raised by AI technology.

“If the end user doesn’t trust the system, then it’s not going to work,” Luxton said. “Violating that trust risks the reputation of our entire profession.”

What is artificial intelligence?

AI technologies analyze massive amounts of information from their environments to recognize patterns, make predictions, and solve problems with a high degree of accuracy.

Deep learning algorithms search for patterns in very large data sets to recognize variables that co-occur—for example, the content of a person’s text messages and the likelihood of a subsequent depressive episode.
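The underlying idea of weighting variables that co-occur with an outcome can be illustrated with a single logistic “neuron” trained by gradient descent. This is a toy sketch with synthetic data, far simpler than a real deep network, which stacks many such units into layers:

```python
import math
import random

random.seed(0)

# Synthetic data: feature 0 genuinely co-occurs with the label 90% of the
# time; feature 1 is pure noise.
data = []
for _ in range(500):
    label = random.randint(0, 1)
    x0 = label if random.random() < 0.9 else 1 - label  # informative
    x1 = random.randint(0, 1)                           # noise
    data.append(((x0, x1), label))

# One logistic "neuron" trained by stochastic gradient descent -- the same
# principle deep networks apply across many stacked layers.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(50):                     # passes over the data
    for (x0, x1), y in data:
        p = 1 / (1 + math.exp(-(w[0] * x0 + w[1] * x1 + b)))
        err = p - y                     # gradient of the log loss
        w[0] -= lr * err * x0
        w[1] -= lr * err * x1
        b -= lr * err

# The learned weights reveal which variable actually tracks the outcome.
print(f"informative weight {w[0]:.2f}, noise weight {w[1]:.2f}")
```

After training, the weight on the informative feature is large while the weight on the noise feature stays near zero, which is how such models single out variables that co-occur with an outcome.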

Reinforcement learning systems complete many trials of a task, adjusting their behavior in response to rewards and penalties (for instance, an agent learning a video game by favoring the actions that earn it the most points), to develop expertise.
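That trial-and-error loop can be sketched with tabular Q-learning, a classic reinforcement learning method, on a toy five-state corridor. This is a minimal illustration, not drawn from any system mentioned in this article:

```python
import random

random.seed(1)

# Toy corridor: states 0..4; the agent starts at 0 and is rewarded at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                     # step left or step right

# Q-table: the estimated long-run reward of each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

for episode in range(200):             # many trials of the same task
    s = 0
    for _ in range(5000):              # cap the length of one trial
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                      # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])   # exploit
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # Nudge the estimate toward reward plus discounted future value.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# After enough trials, the greedy policy heads right from every state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
```

Early episodes are mostly blind wandering; once a trial stumbles onto the reward, the value estimates propagate backward and the agent's behavior rapidly improves, which is the "many trials" dynamic described above.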

Developers use a range of mathematical validation techniques to check whether their models make accurate predictions about the real world. For example, it is standard to evaluate a trained model on a held-out data set with known outcomes and then measure its accuracy, or “hit rate.”
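That held-out-test procedure can be sketched in a few lines of Python. This is a generic illustration with synthetic data and a deliberately trivial one-parameter model:

```python
import random

random.seed(0)

# Toy labeled data: each example is (feature, label), where the true rule
# is label = 1 if the feature exceeds 0.5.
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(200))]

# Hold out part of the data: train on 70%, test on the remaining 30%.
random.shuffle(data)
split = int(0.7 * len(data))
train, test = data[:split], data[split:]

def accuracy(threshold, examples):
    """Hit rate of the threshold model on a set of labeled examples."""
    return sum(int(x > threshold) == y for x, y in examples) / len(examples)

# "Fit" the model: pick the threshold that maximizes training accuracy
# over a coarse grid of candidates.
best = max((i / 100 for i in range(101)), key=lambda t: accuracy(t, train))

# The hit rate that matters is measured only on data the model never saw.
print(f"held-out accuracy: {accuracy(best, test):.2f}")
```

Measuring accuracy on data the model never saw guards against a model that has merely memorized its training examples, which is the point of the validation step.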

Common applications of AI include perceiving and responding to visual stimuli (“computer vision”), interpreting and producing human speech (“natural language processing”), and identifying patterns in very large data sets (“machine learning”).

APA Monitor