China publishes a code of ethics to regulate Artificial Intelligence. What would Isaac Asimov say?

China’s Ministry of Science and Technology has published a code of ethics that aims to regulate existing and developing Artificial Intelligence (AI) models. With it, the country has moved ahead of Europe, which already had a draft regulation along the same lines. The Chinese guidelines prioritize humans’ “full power of decision” over machines, very much in line with the laws of robotics of the writer Isaac Asimov.


Last April, the European Union presented a preliminary draft of a regulation to ensure that humans keep control over AI. However, that draft has yet to materialize, and China has now become a pioneer in launching rules for these booming technologies.

As reported by the South China Morning Post, the document, entitled Ethical Specifications for New Generation Artificial Intelligence, starts from a very clear premise: to “ensure that AI is always under the control of human beings” and that humans have “full decision-making power” over AI.

“Ultimately, China is opting for a heavy-handed model, where the state is thinking very seriously about the long-term social transformations that AI will bring, from social alienation to existential risks, and is actively trying to manage and guide these transformations,” Rebecca Arcesati, an analyst at the German think tank Mercator Institute for China Studies, told the same outlet. “They have a remarkably progressive mindset,” she added.


What does Isaac Asimov have to do with China’s code of ethics for AI?

In 1942, long before Artificial Intelligence was a common topic, science fiction writer Isaac Asimov first published his famous Three Laws of Robotics in the short story ‘Runaround’.

These ‘laws’ were key in Asimov’s works and have served as a basis for much of the genre, from novels and movies to series and comics, precisely because they are an ethical guide for androids to interact with humans without causing harm.

  • First Law. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • Third Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The guidelines proposed by Isaac Asimov are intended to protect humans in the hypothetical case that machines rebel and attack their creators. If a robot, which today we know would run on Artificial Intelligence, tried to disobey these laws, its system would go into ‘self-destruct’ mode.
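To see how strict that hierarchy is, here is a minimal, purely illustrative sketch in Python of a hypothetical controller that checks the three laws in priority order before allowing an action. The Action fields and the decision logic are assumptions made up for this example; they come neither from Asimov’s fiction nor from the Chinese code.

```python
from dataclasses import dataclass

# Hypothetical action description, defined only for this sketch.
@dataclass
class Action:
    harms_human: bool = False           # would the action injure a human, or let one come to harm?
    disobeys_human_order: bool = False  # would the action ignore an order given by a human?
    endangers_robot: bool = False       # would the action put the robot itself at risk?

def is_permitted(action: Action) -> bool:
    """Check the Three Laws in strict priority order: First > Second > Third."""
    if action.harms_human:              # First Law always wins
        return False
    if action.disobeys_human_order:     # Second Law: obey humans, unless that breaks the First Law
        return False
    if action.endangers_robot:          # Third Law: self-preservation, unless it breaks the first two
        return False
    return True

# An order that would harm a human is refused, even though refusing means disobeying the order.
print(is_permitted(Action(harms_human=True)))  # False
print(is_permitted(Action()))                  # True
```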

In other words, the Three Laws of Robotics represent the moral code of an AI, and they are so ingrained in popular culture that they were probably the inspiration for the new Chinese regulations.

The Six Points of China’s Code of Ethics for New Generation Artificial Intelligence

The document describes six basic principles for Artificial Intelligence systems, including ensuring that they are “controllable and reliable.” The other points cover using these technologies to improve human well-being; promoting fairness, transparency and justice; protecting privacy and security; and increasing ethics education.


According to the code, users have every right to accept or reject an AI’s services, as well as to stop interacting with these systems whenever they want.

The guidelines also call for avoiding risks by ensuring that AI systems have no vulnerabilities or security breaches, and that they are not used in illegal or illicit activities that could compromise “national security” or “the general interest.”

This last point is the most controversial, since it aligns with the policies the Chinese government has imposed over the past year to gain more control over the country’s technology sector.

In fact, China recently cracked down on content recommendation algorithms, the vast majority of which are based on AI systems. These algorithms collect and analyze user data to target advertising or to determine what content is shown to users in their feeds or search engines.
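As a rough illustration of what such an algorithm does, the task can be reduced to two steps: build a profile from a user’s interaction history, then rank content by how well it matches that profile. This is a generic toy sketch, not the actual system of any company named in this article; the topic names and scoring are invented for the example.

```python
from collections import Counter

def build_profile(click_history: list[str]) -> Counter:
    """Build a simple interest profile: how often the user engaged with each topic."""
    return Counter(click_history)

def rank(items: dict[str, str], profile: Counter) -> list[str]:
    """Order item titles so that topics the user engaged with most appear first."""
    return sorted(items, key=lambda title: profile[items[title]], reverse=True)

# Made-up data: the user mostly clicks on sports content, so sports is shown first.
profile = build_profile(["sports", "sports", "finance"])
items = {"Match recap": "sports", "Market update": "finance", "New recipe": "cooking"}
print(rank(items, profile))  # ['Match recap', 'Market update', 'New recipe']
```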

For this reason, Arcesati says the publication of the code is “a clear message” to technology giants such as Amazon, Facebook and Google, and to all companies that “have based their business model on recommendation algorithms.”

It is worth remembering that one of China’s goals is to become a leader in Artificial Intelligence by 2030, the publication notes. Perhaps that explains the urgency of ethically regulating all current and future types of AI.

Meanwhile, in the European Union, activist groups fear that Artificial Intelligence will be used for authoritarian or mass-surveillance purposes. In fact, one of their main demands is that facial recognition systems be banned under the bloc’s future AI Regulation.


Mairem Del Rio