A Federal Law That Protects Competition but Permits Hate and Harassment Online Must Be Revised

Fixing a flawed Internet free speech regulation requires input from more than just tech giants such as Facebook and Twitter

One 2017 paper, updated in 2019, suggests that in cities where the online classified ad service Craigslist allowed erotic listings, the overall female homicide rate dropped by 10 to 17 percent. Although other researchers have contested the link between online advertising and greater safety, consensual sex workers have reported negative effects as a result of FOSTA-SESTA.

Joe Biden and Donald Trump have both called for outright repeal of Section 230. Others in Congress are proposing less radical changes, offering bills such as the Platform Accountability and Consumer Transparency (PACT) Act, which would require social media companies to disclose their moderation practices to show they are not arbitrary and to promptly take down content that a court deems illegal. The stricter takedown standard would favor wealthy companies such as Facebook, which can afford to employ armies of moderators and lawyers, and disfavor start-ups—just the problem Section 230 was meant to prevent. In addition, as they did in response to the laws intended to curtail sex trafficking, smaller platforms are likely to increase overly broad censorship of users to avoid legal challenges.

As digital-rights group the Electronic Frontier Foundation (EFF) points out, hobbling Section 230 could have a chilling effect on free speech online and make it much more difficult for new competitors to challenge the dominance of big tech. The EFF is not the only voice picking holes in legislation like the PACT Act: academics and other technology advocacy groups have offered measured critiques of the bill and proposed their own solutions for strategically modifying Section 230. One of their suggestions is to ensure the bill would apply only to platforms that host users directly—not to the companies providing background support for functions such as Internet access and payment processing—to protect the larger infrastructure of the Internet from legal liability. Another idea is to improve users’ ability to flag problematic content by working with legal authorities to develop a standardized reporting process that any platform could apply.

Input from experts like these—not just from billionaire CEOs such as Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey, the usual suspects when hearings are convened on Capitol Hill—is crucial to craft nuanced legislation that will give online platforms incentives to protect users from harassment and to suppress malicious content without unduly compromising free speech. If that happens, we might get Internet regulation right.

This article was originally published with the title “Politicians and Tech Billionaires Can’t Fix Social Media” in Scientific American 324, 1, 10 (January 2021)
