The regulation of AI is too important to be left only to technologists

Opinion

Regulating technology is about safety, but it is also about the kind of civilisation we wish to create for ourselves.

Summary

  • AI’s ethics and safety need attention, but we must also think about what kind of society we want to create. Since civilisation as we know it could be at stake, people from all walks of life need to debate AI and join the effort to frame its rules.

Who decides who decides? It is a question we will ask often in 2024. With the stunning debut of ChatGPT, 2023 was the year AI became a buzzword and LLMs took over the discourse around us. With AI deepfakes threatening major democratic elections, fears of AI-powered autonomous drones and other weapons being used in wars fought by humans, and corporations ready to lay off workers as their jobs get ‘replaced’ by AI, the debate on AI ethics and safety will only grow more heated. Every major country and global grouping has jumped into the race to police AI, yet the regulatory air is still hazy with proposals, principles and guidelines. Although almost everyone agrees that this shape-shifting technology needs to be controlled and guard-railed, it is not clear whose approaches and guidelines will prevail. Who decides that? More importantly, who decides who decides?

In my opinion, the debate goes beyond AI safety or ethics. It is much more fundamental than that: it is about what kind of future we want for humanity. Technologies have come and gone, but this one is fundamentally different. Gartner aptly says that AI is not a mere technology or trend; it is a fundamental shift in how humans and machines will interact with each other. A few years back, in those halcyon pre-GPT days, I mused on the frog-in-boiling-water parable, and whether technology was that slowly boiling water which eventually incapacitates the frog. Mobile phones made us forget our dear ones’ phone numbers, Google Maps made us forget directions, and search engines and chatbots are making us forget what libraries look like. Many children today do not know that vegetables actually grow somewhere other than a supermarket. Soon, autonomous cars might make us forget how to drive. Will we turn into ineffectual vegetables, spending our time playing video games and ‘consuming content’ while AI robots and algorithms do our work for us? Is that the future we are creating?

The debate, therefore, is not just about regulating AI; it is about the future we want for ourselves. Consider the recent scriptwriters’ strike in Hollywood, where writers feared the loss of their jobs to AI. But as Jamie Susskind writes in the Financial Times (bit.ly/48m8DvN): “Is the point of cinematic art to provide a living for people in the film industry? Or is it to provide stimulation and joy for consumers?” Would we want films produced by humans merely to keep those humans employed? “A similar debate,” he says, “is playing out in the world of literature.” While Margaret Atwood and Stephen King worry that their works are being used to train AI systems, would it not be wonderful for AI to write like them after they are long gone, for Beethoven to continue producing glorious music, or for Rabindranath Tagore to keep creating masterpieces long after his time? When Snoop Dogg and Dr. Dre produced a hologram of Tupac Shakur in 2012, more than 15 years after Shakur’s death, did it not thrill the fans who wanted him back? Must all creativity and expression come from human beings if AI can create something much better?

This debate came home to me yet again when I witnessed the recent revolution in gemstones. The market has been upended by cheap lab-made diamonds that are identical to mined stones. Is it a good thing that everyone can now buy these beautiful lab-made gemstones, or is it bad that we are replacing something nature perfected over billions of years with stones born of furnace-like temperatures and crushing pressure? As Susskind writes: “Is art’s purpose merely to venerate and compensate artists, or to provoke aesthetic stimulation and cultural advance? These aren’t easy questions. And they can’t simply be answered by reference to ‘safety’ either. These debates are about values. They ask us to choose, in Amos Oz’s words, between ‘right and right’. Regulating technology is about safety, but it is also about the kind of civilisation we wish to create for ourselves. We can’t leave these big moral questions for AI companies (like OpenAI) to decide.”

Or even for governments and regulators. We need people who think and work beyond just the technologies of AI; we need historians and philosophers, humanists and naturalists, sociologists and musicians. People who live and breathe AI technology are the wrong people to decide the future we want, since all they know is AI. Paraphrasing C.L.R. James in his immortal Beyond a Boundary: What do they know of AI, who only AI know?


Jaspreet Bindra