Scared about the threat of AI? It’s the big tech giants that need reining in | Devdatt Dubhashi and Shalom Lappin

Social media companies’ algorithms enable the spread of extremism and social chaos. The case for regulating them is clear

Thu 16 Dec 2021 10.45 GMT

In his 2021 Reith lectures, the third episode of which airs tonight, the artificial intelligence researcher Stuart Russell takes up the idea of a near-future AI that is so ruthlessly intelligent that it might pose an existential threat to humanity. A machine we create that might destroy us all.

This has long been a popular topic with researchers and the press. But we believe an existential threat from AI is both unlikely and in any case far off, given the current state of the technology. However, the recent development of powerful, but far smaller-scale, AI systems has had a significant effect on the world already, and the use of existing AI poses serious economic and social challenges. These are not distant, but immediate, and must be addressed.

These include the prospect of large-scale unemployment due to automation, with attendant political and social dislocation, as well as the use of personal data for purposes of commercial and political manipulation. The incorporation of ethnic and gender bias in datasets used by AI programs that determine job candidate selection, creditworthiness, and other important decisions is a well-known problem.

But by far the most immediate danger is the role that AI data analysis and generation plays in spreading disinformation and extremism on social media. This technology powers bots and amplification algorithms. These have played a direct role in fomenting conflict in many countries. They are helping to intensify racism, conspiracy theories, political extremism and a plethora of violent, irrationalist movements.

Such movements are threatening the foundations of democracy throughout the world. AI-driven social media was instrumental in mobilising January’s insurrection at the US Capitol, and it has propelled the anti-vax movement since before the pandemic.

Behind all of this is the power of big tech companies, which develop the relevant data processing technology and host the social media platforms on which it is deployed. With their vast reserves of personal data, they use sophisticated targeting procedures to identify audiences for extremist posts and sites. They promote this content to increase advertising revenue, and in so doing, actively assist the rise of these destructive trends.

They exercise near-monopoly control over the social media market, and a range of other digital services. Meta, through its ownership of Facebook, WhatsApp and Instagram, and Google, which controls YouTube, dominate much of the social media industry. This concentration of power gives a handful of companies far-reaching influence on political decision making.

Given the importance of digital services in public life, it is reasonable to expect that big tech would be subject to the same sort of regulation that applies to the corporations that control markets in other parts of the economy. In fact, this is not generally the case.

Social media companies have not been restricted by the antitrust regulations, truth-in-advertising legislation, or laws against racist incitement that apply to traditional print and broadcast media. Such regulation does not guarantee responsible behaviour (as rightwing cable networks and rabid tabloids illustrate), but it does provide an instrument of constraint.

Three main arguments have been advanced against increased government regulation of big tech. The first holds that it would inhibit free speech. The second argues that it would degrade innovation in science and engineering. The third maintains that socially responsible companies can best regulate themselves. These arguments are entirely specious.

Some restrictions on free speech are well motivated by the need to defend the public good. Truth in advertising is a prime example. Legal prohibitions against racist incitement and group defamation are another. These constraints are widely accepted in liberal democracies (with the exception of the US) as integral to the legal approach to protecting people from hate crime.

Social media platforms often deny responsibility for the content of the material that they host, on the grounds that it is created by individual users. In fact, this content is published in the public domain, and so it cannot be construed as purely private communication.

When it comes to safety, government-imposed regulations have not prevented dramatic bioengineering advances, like the recent mRNA-based Covid vaccines. Nor did they stop car companies from building efficient electric vehicles. Why would they have the unique effect of reducing innovation in AI and information technology?

Finally, the view that private companies can be trusted to regulate themselves out of a sense of social responsibility is entirely without merit. Businesses exist for the purpose of making money. Business lobbies often ascribe to themselves the image of a socially responsible industry acting out of a sense of concern for public welfare. In most cases this is a public relations manoeuvre intended to head off regulation.

Any company that prioritises social benefit over profit will quickly cease to exist. This was illustrated by Facebook whistleblower Frances Haugen’s recent congressional testimony, which indicated that the company’s executives chose to ignore the harm that some of their algorithms were causing in order to sustain the profits those algorithms generated.

Consumer pressure can, on occasion, act as leverage for restraining corporate excess. But such cases are rare. In fact, legislation and regulatory agencies are the only effective means that democratic societies have at their disposal for protecting the public from the undesirable effects of corporate power.

Finding the best way to regulate a powerful and complex industry like big tech is a difficult problem. But progress has been made on constructive proposals. Lina Khan, the chair of the US Federal Trade Commission, has advanced antitrust proposals for dealing with monopolistic practices in digital markets. The European commission has taken a leading role in instituting data protection and privacy laws.

Academics MacKenzie Common and Rasmus Kleis Nielsen offer a balanced discussion of ways in which government can restrict disinformation and hate speech in social media, while sustaining free expression. This is the most complex, and most pressing, of the problems involved in controlling technology companies.

The case for regulating big tech is clear. The damage it is doing across a variety of domains is throwing into question the benefits of its considerable achievements in science and engineering. And because corporate power is global, the ability of national governments in democratic countries to restrain big tech on their own is increasingly limited.

There is a pressing need for large trading blocs and international agencies to act in concert to impose effective regulation on digital technology companies. Without such constraints big tech will continue to host the instruments of extremism, bigotry, and unreason that are generating social chaos, undermining public health and threatening democracy.

  • Devdatt Dubhashi is professor of data science and AI at Chalmers University of Technology in Gothenburg, Sweden. Shalom Lappin is professor of natural language processing at Queen Mary University of London, director of the Centre for Linguistic Theory and Studies in Probability at the University of Gothenburg, and emeritus professor of computational linguistics at King’s College London.
