7 ways the technology sector could support global society in 2022

Some of the excesses of 2021 have shown us how digital technologies can undermine what philosophers call future “human flourishing.” A lot has been written on this topic in the first few days of the new year, but take two examples — MIT Technology Review’s list of the worst excesses of technology and Fast Company’s 5 best and worst tech moments of 2021 — and it’s evident how little power people affected by technologies have when things go wrong under current systems.

What’s also clear as we enter 2022 is that global tolerance for technology’s unchecked disruption of societal institutions, conventions, and values is waning. This is the year governments will pass legislation to control the effects of digital technologies on societies, across many jurisdictions and in relation to numerous existing and emergent technologies. The EU AI and Digital Services Acts, the UK Online Safety Bill, and the US SAFE TECH Act are just a few of the efforts underway.

Legislation is a marker of societal concern, but it’s also clear that non-specialist, “ordinary” people have an increasingly sophisticated understanding of the relationship between technology and society. Whether you loved or hated the absurdist satire Don’t Look Up, which debuted in December, the film holds up a mirror to how big tech’s goal setting is shaping our civilization. Its central dilemma, ‘Do we act to save the planet, or to mine the comet’s valuable mineral resources?’, could equate to ‘Do we make technologies work for corporate goals, or for societal ones?’

The complex environment that Duke professor Peter Haff called the “technosphere” is ripe for change, and it’s not an easy, technical fix. It will require us to think big — beyond current frameworks and at scale — and to make international changes that complement and support each other, for example to avoid creating regulation-free domains (“Switzerlands” of technology development).

Economist Joseph Stiglitz’s insight that the system that brought us regulation also brought us regulatory capture should make us look hard at solutions that only further encode corporate and societal power imbalances. As anthropologist David Graeber pointed out, “The ultimate hidden truth of the world is that it is something we make, and could just as easily make differently.”

In the spirit of bold new beginnings and thinking big, here are seven suggestions for different goals, approaches, and behaviors that the technology sector could adopt in 2022, which would support global society rather than corporate goals.

1. Reduce its carbon footprint 

The technology sector has long wanted to be seen as leading the world in decarbonizing and enabling other sectors to become more energy efficient. But research still shows technology companies routinely underreport their carbon footprints by failing to account for emissions across their value chains, from raw material extraction to end-product use. The seemingly frivolous excesses of blockchain applications like Dogecoin and non-fungible tokens (NFTs), metaverses, and entrepreneur-driven moonshots point to a value system that still overlooks the enormous energy overhead underpinning every use of technology.

2. Be transparent about technological progress 

Misrepresenting the realities of technologies like artificial intelligence to support an aspirational futurist vision is still endemic, as is failing to acknowledge when real-world technologies don’t deliver on expectations. For example, evidence shows that digital contact tracing apps have not made the substantial contribution to protecting public health that was hoped for, autonomous vehicles still have a higher rate of accidents than human-driven cars, and emerging products like Web 3.0 are surrounded by confusing hype.

The dominant media narrative is that technology is driving humanity along a trajectory of progress, interrupted by occasional real-world failures. It might benefit us more to see technology companies as one among many contributors to the future of human potential — alongside a myriad of other experts, from philosophers and engineers to humanitarian workers and particle physicists — while recognizing that chasing goals that benefit their corporate structures more than society might be a significant distraction from the task at hand.

3. Work with regulators 

International regulators are jumping through hoops to rein in multinational technology companies. The EU AI Act is an ambitious attempt to set an international standard for the development of trustworthy AI through risk-based categories; the UK Online Safety Bill and the EU Digital Services Act take the route of requiring standardized transparency reporting; and the US SAFE TECH Act aims to reaffirm civil rights, victims’ rights, and consumer protections.

States like California, Virginia, and Colorado were early adopters of privacy protection laws, but investigative journalism has uncovered “a lobbying juggernaut” that identifies international privacy regulation as a target and gives companies like Amazon huge influence over the drafting of legislation. Meta (formerly Facebook) is publicly adamant that it wants regulation, but the informed view is that the company wants to retain credibility in policy circles while steering legislators toward areas where it’s comfortable seeing tighter government controls.

Regulators are responding by hiring the best and brightest from industry and ethical research — for example, the FTC has shored up its AI Strategy Group with academic experts in economics, law, technology, and policy. These are people equipped to tackle this complex task because they understand the technical architectures and processes of technology. They are working toward societal goals and would benefit from industry support.

4. Cocreate better practices 

Without industry support to make them fit for purpose, the cycle of unadopted ethics frameworks (most recently the UNESCO recommendations) will continue. And the parable of Alexa and the penny (as Meredith Whittaker points out) demonstrates that the fundamental issues baked into the relationship between AI and society can’t be solved through practices like engineering hygiene and algorithmic auditing alone.

But measures that support a better understanding of how technologies work are gaining traction, like the algorithmic transparency standard for public-sector bodies developed by the UK government. Moving from transparency to recourse leads to accountability mechanisms and creates a relational dynamic with regulators, affected communities, and members of the public, enabling those groups to participate meaningfully in how technologies are designed and deployed. More radical ideas, like enforcing interoperability of platforms and portability of data to rebalance the power dynamic toward users, could change technology, business models, and people’s relationships with digital tools and services.

5. Cooperate with independent research 

The corporate capture of research, simply put, means that you’re damned if you take the tech company dollar and excluded from access to technologies and data if you don’t. Witness Facebook rescinding NYU researchers’ access to data on political ad targeting, or Timnit Gebru’s expulsion from Google over the Stochastic Parrots paper, which exposed the company’s problematic relationship with ethical research that undermines its carefully crafted narratives. To ensure that technology companies aren’t able to undervalue or coopt peer-reviewed research practices, that power balance will need to be reset.

New, independent, and community-rooted organizations like Gebru’s DAIR Institute, the Minderoo Foundation’s frontier technology network nodes in Oxford, Cambridge, New York, Los Angeles, and Western Australia, and the Ada Lovelace Institute in London (where I work) are vital players, monitoring bad practice, applying pressure, and effecting change. To be truly impactful, they will need research access to technology companies’ practices, policies, and infrastructures.

6. Understand and value its employees 

Tech worker whistleblowers continue to emerge: Frances Haugen recently provided evidence to the US Congress and the European Parliament that Facebook was aware its platform services were amplifying misinformation. But these individual acts of bravery are not enough to shift the power imbalances between corporate structures and workers perpetuated by entrenched corporate cultures and practices. To disrupt the corporate capture of ethics and support those working in technology, we need a more nuanced understanding of the moral and ethical positioning of tech workers. And we need to mobilize this understanding to empower tech workers to act within companies as directed by their own moral compasses.

7. Build trustworthy technologies that engender trust 

Conversations about public understanding of technologies frequently focus on trust — that the public needs to be made to trust technologies. But research shows that distrust is frequently a rational response, and it would be better to move the focus to changing industry practices and building companies and technologies that are demonstrably worthy of trust.

Many technology companies genuinely want to build products that support human flourishing, but if some companies’ primary goals are mediated through shareholder interests, they risk provoking a further public backlash (2021 Pew Research Center data shows 56% of Americans support more regulation of technology companies) and a third “AI winter.”

The advent of legislation in 2022 will be a giant leap forward. It will begin to move the needle towards rebalancing power dynamics — for example, towards historically oversurveilled and disadvantaged communities. But such rebalancing will only find real traction with the support of all technology companies, particularly those that hold the knowledge of the code and processes shaping technologies and have the power to create or prevent change.

Octavia Reeve is Interim Lead at the Ada Lovelace Institute.
