Why Google’s Supreme Court Case Could Rattle the Internet
On Tuesday the Supreme Court began hearing arguments in a case called Gonzalez v. Google, which questions whether tech giants can be held legally responsible for content promoted by their algorithms. The case targets a cornerstone of today’s Internet: Section 230, a statute that protects online platforms from liability for content produced by others. If the Supreme Court weakens the law, platforms may need to revise or eliminate the recommendation algorithms that govern their feeds. And if the Court scraps the law entirely, it will leave tech companies more vulnerable to lawsuits based on user content.
“If there are no protections for user-generated content, I don’t think it’s hyperbolic to say that this is probably the end of social media,” says Hany Farid, a computer scientist at the University of California, Berkeley. Social platforms, such as Twitter and YouTube, rely heavily on two things: content created by users and recommendation algorithms that promote the content most likely to capture other users’ attention and keep them on the platform as long as possible. The Court’s verdict could make either or both strategies more dangerous for tech companies.
Gonzalez v. Google originated in the events of November 2015, when armed men affiliated with the terrorist organization ISIS killed 130 people in six coordinated attacks across Paris. Nohemi Gonzalez, a 23-year-old student, was the only American to die in the attacks. In the aftermath, her family sued Google, which owns YouTube, arguing that the video platform’s recommendation algorithm promoted content from the terrorist group.
Google argues that using algorithms to sort content is “quintessential publishing,” something necessary for users to be able to navigate the Internet at all, and therefore protected under Section 230. That statute, originally part of the Communications Decency Act of 1996, states that computer service providers cannot be treated as the publishers of information created by someone else. It’s a measure dating to the early days of the Internet that was meant to keep technology companies from intervening heavily in what happens online.
“This law was designed to be speech-maximizing, which is to say that by giving companies pretty broad immunity from liability, you allow companies to create platforms where people can speak without a lot of proactive monitoring,” says Gautam Hans, an associate clinical professor of law at Cornell Law School.
Gonzalez argues that recommendation algorithms go beyond simply deciding what content to display, as “neutral tools” like search engines do, and instead actively promote content. But some experts disagree. “This distinction just absolutely does not make sense,” says Brandie Nonnecke, a technology policy specialist and director of the CITRIS Policy Lab, headquartered at U.C. Berkeley. She contributed to a brief about the case that argues that both types of algorithms use preexisting information to determine what content to show. “Differentiating the display of content and the recommendation of content is a nonstarter,” Nonnecke says.
In deciding Gonzalez v. Google, the Supreme Court can follow one of three paths. If the Court sides with Google and declares that Section 230 is fine as is, everything stays the same. At the most extreme, the Court could toss all of Section 230 out the window, leaving tech giants open to lawsuits over not just content that their algorithms recommend but also whatever users say on their sites.
Or the Court can take a middle path, adapting the statute so that technology companies face some additional liability in certain circumstances. That scenario might play out a bit like a controversial 2018 modification to Section 230, which made platforms responsible for third-party content tied to sex trafficking. Given the constraints of Gonzalez v. Google, modifying Section 230 might involve changes such as excluding content related to terrorism—or requiring companies to rein in algorithms that push ever more extreme content and that prioritize advertising gains over the interests of users or society, Farid says.
Hans doesn’t expect the Supreme Court to release its decision until late June. But he warns that if Section 230 falls, big changes to the Internet will follow fast—with ripples reaching far beyond YouTube and Google. Technology platforms, already dominated by a handful of powerful companies, may consolidate even more. And the companies that remain may crack down on what users can post, giving the case implications for individuals’ freedom of speech. “That’s the downstream effect that I think we all should be worrying about,” Hans says.
Even if the Supreme Court sides with Google, experts say momentum is building for the government to rein in big tech, whether through modifying Section 230 or introducing other measures. Hans says he hopes Congress takes the lead, although he notes that lawmakers have not yet succeeded in passing any new legislation to this end. Nonnecke suggests that an alternative approach could focus on giving users more control over recommendation algorithms or a way to opt out of sharing personal information with algorithms.
But the Supreme Court doesn’t seem likely to step away from the issue, either. A second case being argued this week, called Twitter v. Taamneh, also looks at tech platforms’ liability for proterrorism content. And as early as this fall, experts expect the Supreme Court to take up cases that explore two conflicting state laws about content moderation by social media platforms.
“No matter what happens in this case, regulation of technology companies is going to continue to be an issue for the Court,” Hans says. “We’re still going to be dealing with the Supreme Court and technology regulation for a while.”
ABOUT THE AUTHOR
Meghan Bartels is a science journalist and news reporter for Scientific American who is based in New York City.