On December 9th, the U.S. government advanced a groundbreaking lawsuit against Facebook, calling for the tech giant to be broken up in response to antitrust concerns. This comes on the heels of a growing bipartisan movement towards increasing regulation of America’s leading internet companies — and foreshadows a likely shift in policy under the incoming Biden administration. In this piece, the author suggests that Congress and the Executive Branch are likely to advance tech regulation in four key areas: protecting data privacy, requiring algorithmic transparency to identify and reduce bias, realigning growth incentives to address antitrust concerns, and holding platforms liable for facilitating the spread of harmful content. While any attempts at regulation will have to be undertaken thoughtfully, the substantial support for reform on both sides of the aisle — as well as strong track records on these issues from both Joe Biden and Kamala Harris — suggests that real change may be on the horizon.
Last week, we witnessed a watershed moment: On December 9th, the U.S. federal government advanced a major lawsuit against Facebook, accusing the company of anticompetitive behavior and arguing that WhatsApp and Instagram should be split off from Facebook. This is the first substantial move by the federal government to break up an internet company, and it sheds light on what technology policy may look like once Joe Biden becomes president.
Specifically, this lawsuit is the latest indication of growing bipartisan concern about the concentration of power held by tech companies. Earlier this year, the Democratic-led House held an antitrust hearing looking at allegations of anti-competitive behavior among four top tech companies, while the Justice Department, led by Trump appointees, advanced another antitrust lawsuit against Google. Last week’s suit grew out of recent investigations by the Federal Trade Commission (FTC), and was advanced by two Democratic FTC commissioners alongside Republican Chairman Joseph Simons.
This bipartisan support for increased regulation stems in large part from a dramatic shift in public sentiment on this issue. The impact of social media on both the 2016 and 2020 elections has led American citizens to increasingly see misinformation, privacy, and excessive market power as serious public policy concerns — concerns that many feel should be addressed by Washington. In fact, according to a recent Consumer Reports survey, roughly three in four Americans “worry about the power wielded by today’s biggest tech platforms.”
Given these trends, how might we expect policy to shift under a Biden administration? To answer this question, it is necessary first to unpack the variety of interconnected issues surrounding leading internet platforms such as Facebook, Google, and Twitter. Some of these issues include:
- Intentional foreign and domestic disinformation efforts, in which bad actors promote political falsehoods on social media networks for illegitimate political gain
- The rampant spread of misinformation and conspiracy theories, where individuals unknowingly spread misleading or false content to the detriment of public discourse and the political process
- The proliferation of hateful and violent content, such as the online incitement in Myanmar that the United Nations concluded helped facilitate genocide
- The radicalization of voting segments and political factions in democracies around the world
- The algorithmic bias created by AI systems that curate social content and target users with personalized digital ads
To be sure, the people running these tech companies likely never intended for their platforms to be used in this manner. But given their current business models, there seems to be little chance that these problems will be solved without outside intervention. As such, Congress and the White House are likely to focus on reforming certain key elements of these companies’ business practices that engender this problematic activity, with policies aimed at:
- Protecting consumer privacy by limiting the uninhibited collection and use of personal data for behavioral profiling
- Requiring algorithmic transparency to reveal how and why social media posts and ads target different individuals
- Leveraging antitrust policies to realign platforms’ growth incentives, limiting the potential for anticompetitive conduct that keeps would-be rivals from entering the market
- Ensuring platforms are held liable when they facilitate the spread of (or fail to effectively moderate) harmful content — without infringing on companies’ and users’ freedom of expression
While many of these trends are long-standing, recent events suggest that Washington is likely to make significant progress in each of these areas in the coming years.
Data Privacy
Over the last several years, governments around the world have shown increasing interest in addressing data privacy concerns. In 2018, the EU’s landmark General Data Protection Regulation (GDPR) significantly increased requirements for how consumer data is stored and shared, and California soon followed with its Consumer Privacy Act (CCPA). While some privacy advocates have suggested that the CCPA does not go far enough to protect privacy, it remains the most stringent consumer privacy law on the books in the United States — and it could function as a model for nationwide privacy legislation that would offer all Americans baseline protections.
Algorithmic Transparency
While algorithms that identify relevant content for consumers can be useful, they also create a bubble effect that has become increasingly problematic for Americans across the political spectrum. On the right, Americans have made allegations of anti-conservative bias on Facebook and Twitter, while liberals have suggested that platforms like YouTube have not done nearly enough to contain the spread of conspiracies, misinformation, and disinformation. In response, congressional Republicans and Democrats alike have clamored for transparency into how exactly these algorithms work, so that researchers, the journalistic community, and the public at large can better understand how content is served (and identify and address cases where these platforms are systematically spreading biased or inaccurate information).
Specifically, a bipartisan group including Sen. Mark Warner, Sen. Amy Klobuchar, Sen. Lindsey Graham, and the late Sen. John McCain called for greater transparency in digital political advertising several years ago — and similar efforts are likely to grow in number and scope under Biden. The incoming administration is likely to seek the transparency reforms that the Democratic party has pushed for since the 2016 election, both as a show of support for a long-standing Democratic issue and to signal a refocusing on social justice concerns that many feel have been sidelined under Trump. Kamala Harris, for her part, has introduced legislation as a senator to advance diversity initiatives in tech jobs — which would help to begin addressing algorithmic bias in the tech industry.
Antitrust
Today’s digital economy is dominated by just a handful of companies: Facebook, Amazon, Google, Apple, and Microsoft. As a result, policy experts, legal scholars, and economists have become increasingly concerned about the monopolization of vital consumer markets such as search, social media, web-based text messaging, e-commerce, and email. Monopolies are classically known to harm economies in three key ways: a slowed pace of market innovation, exploitative rent extraction at the expense of the rest of society, and diminished quality of service — and there’s reason to think all three of those trends are at play today.
That’s part of why we’ve seen not just the recent Facebook suit, but also the preceding House antitrust report and corresponding hearing, as well as the Justice Department’s substantial allegations against Google. All of these events point to a growing trend of leveraging trustbusting to take on internet giants, and that is progress on which the Biden administration is sure to build. Of course, breaking up tech companies on its own isn’t going to solve these wide-ranging problems, but it does offer a starting point for addressing the outsized power that many of these platforms currently hold.
Content Moderation and Liability
Both President Trump and President-Elect Biden have independently suggested that Section 230 of the Communications Decency Act must be reconsidered and reconstructed. The law, originally passed in 1996, in essence affords internet platform companies immunity from liability for almost all forms of objectionable (or even illegal) user-generated content that flows over their platforms. While this was intended to ensure platforms could offer a truly diverse, unfettered forum for public discourse relying on voluntary self-moderation, it has in practice meant that anything from lies about sitting politicians to death threats and myriad other forms of harmful content can be knowingly disseminated — and platforms are within their rights to leave the content up. In fact, in his speech at Georgetown last year, Mark Zuckerberg effectively stated that politicians are free to intentionally spread lies on Facebook’s platforms, and that they will not be subject to moderation.
In response to issues like these, we have seen tremendous bipartisan support for introducing greater liability on popular platforms like Facebook and Twitter, essentially forcing companies to ensure that the discourse taking place on their platforms meets certain standards. Some legislators have already introduced bills that would force transparency in content moderation on social media platforms, while others have suggested that carve-outs from the blanket liability shield offered by Section 230 should be considered for content that is particularly damaging to society, such as known disinformation and explicit, hateful content. And while many platforms have already instituted similar standards as internal corporate policies — including Twitter’s hate speech policies and Facebook’s affirmation that it will take down known disinformation disseminated by the Russia-based Internet Research Agency — these firms don’t always live up to their own (self-enforced) standards. As such, a bipartisan federal law, if established thoughtfully and enforced effectively, could introduce meaningful liability and ultimately reduce the spread of harmful content without impeding free expression.
These issues are nuanced, and potential reforms are sure to invite tremendous scrutiny from the right, the left, and everyone in between. And that is as it should be. Though the current system is clearly in need of an update, it will be essential to avoid rash, hastily developed policies that could worsen the problems they aim to solve. The internet has provided unparalleled value to America and the entire world, and much of that growth is the direct result of the free-market approach that the United States took toward regulation more than two decades ago. As we move forward into a new chapter of digital regulation, it will be essential to balance free market ideals with evolving technological and political realities. If there is one thing that the American democratic system has placed above markets time and again, it is the protection of democracy itself — and the internet industry is no exception.