China’s AI tech leaves aside questions of ethics

TOKYO — Artificial intelligence, like other forms of technology, reflects the culture and values of the people who create it and those who provide the data frameworks upon which it is built. AI technology developed in different countries or organizations may thus offer different answers to the same problem.

On June 25, the National Security Commission on Artificial Intelligence, an independent U.S. government body, released "The Role of AI Technology in Pandemic Response and Preparedness: Recommended Investments and Initiatives." The report makes 10 recommendations, including calls for the creation of a federal "Pandemic Preparedness Dataset" and investment in "the digital modernization of state and local health infrastructure required for effective disease surveillance."

It was the commission’s third report published since May 6 on AI and the government’s response to the COVID-19 pandemic. The commission’s so-called COVID-19 White Paper Series deals with fundamental ethical issues related to the use of AI in policy responses to the crisis, including privacy and the responsibilities of software developers.

The commission has tried to help establish clear government positions on these issues, and to present them to Congress and the American people. This is time-consuming, but despite the urgency of the pandemic, the government must follow democratic processes when deciding how to spend taxpayer money.

China, by contrast, has been very quick to wield AI in its battle against the disease. AI has been used to identify individuals in contact tracing and to predict how the virus spreads.

Infection rates for the novel coronavirus in the U.S. and Brazil are well over 15,000 per million people as of Thursday. The official figure for China, where the pandemic started, is 62, although there is some doubt about the credibility of the Chinese data.

There is no denying that a society that puts the collection of personal data needed to track a virus ahead of the protection of privacy can, in theory, do a better job containing an outbreak of infectious disease.

The pandemic has brought to the fore wide differences in values and cultural norms between nations. Shigeo Kawashima, an associate professor at Aoyama Gakuin University who studies the interaction between society and technology, believes the world has reached a historical crossroads with regard to the evolution of AI. The question, Kawashima says, is “which country will be able to build an AI-driven society where people, not machines, play the central role.”

Human interaction with technology is not culturally universal or value-neutral.

“Western society, which is based on monotheistic Christianity and Judaism, and Chinese society, which is based on materialism and Marxism, naturally have different ethical systems,” Kawashima said.

An emerging field of research called explainable AI, or XAI, highlights this. XAI refers to approaches that ensure the process of AI-based decision-making can be understood by humans: systems that can "explain" how they arrive at specific decisions.
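To give a rough sense of what that means in practice, here is a minimal sketch, assuming Python and the scikit-learn library, of an inherently interpretable model. A shallow decision tree can print the exact if/then rules behind its predictions; the library's built-in Iris dataset is used only so the snippet runs on its own.

```python
# A minimal sketch of an "explainable" model, assuming Python and scikit-learn.
# A shallow decision tree can print the rules it learned; the opaque systems
# discussed below offer no comparable audit trail.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()  # built-in toy dataset, used only to keep the sketch self-contained
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text renders the trained model as human-readable if/then rules,
# the kind of explanation XAI research tries to recover from opaque models.
print(export_text(tree, feature_names=list(data.feature_names)))
```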

The U.S. Defense Department’s Defense Advanced Research Projects Agency, or DARPA, is leading the way in this area of research. The U.S. military is keen to develop computer systems that produce transparent explanations for the decisions made by AI systems, even though it tends to be reluctant to disclose its activities in general. This is because autonomous weapons systems, and even autonomous planning of military operations, are already at the stage of practical application in the U.S.

If machines used by the military make choices that affect human lives — whether to bomb a particular target, say — the military has a responsibility to explain how and why those decisions are taken. The military must also fully understand and be able to explain how and why specific decisions have been made so it can trust the actions of AI systems themselves. In other words, a military can’t simply say, “Oh sorry, the AI decided it, not us humans.”

XAI is a means of tackling the “black box” problem in machine learning, in which even system designers cannot explain how their AI arrives at specific decisions.

To take an innocuous example, imagine a system that uses AI to determine whether an image is that of a cat or a dog. To create such an AI system, engineers train computers to recognize common patterns using a huge number of images of cats and dogs, exposing them to numerous variations, and specifying whether the image in question is a cat or a dog each time.

As they are shown millions of images, computers learn to tell cats and dogs apart. But using this method, even when the AI correctly identifies an image as a cat or a dog, one cannot say how the AI has done so. Decision-making based on machine learning is inherently a “black box.” AI is an opaque technology by its nature.
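As an illustration of that training process and the opacity it produces, here is a minimal sketch, assuming Python and the PyTorch library; random tensors stand in for the millions of labeled cat and dog photos a real system would use, so the snippet runs on its own.

```python
# A minimal sketch of the cat-vs-dog training loop described above, assuming
# Python and PyTorch. Random tensors stand in for real labeled photos.
import torch
import torch.nn as nn

# A tiny convolutional network: whatever it learns about "cat" vs. "dog"
# is spread across its numeric weights, not stored as explicit rules.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                      # two outputs: cat (0) or dog (1)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images = torch.randn(32, 3, 64, 64)    # stand-in for a batch of photos
    labels = torch.randint(0, 2, (32,))    # stand-in for "cat"/"dog" tags
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # penalize wrong guesses
    loss.backward()                        # nudge the weights toward better guesses
    optimizer.step()

# The trained model will answer "cat or dog?" for a new image, but nothing in
# its weights says why it chose one label over the other: the "black box."
prediction = model(torch.randn(1, 3, 64, 64)).argmax(dim=1)
print(prediction)
```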

Different societies breed different kinds of AI. Chinese AI will certainly become increasingly clever at dazzling speed because of the huge amounts of data available to AI engineers in the country.


Marlon Technologies demonstrates its artificial intelligence capabilities in Shenzhen in 2018. China is increasingly applying AI in health care. (Photo by Yasu Ota)

According to a survey released by the Japan Patent Office at the end of July, Chinese entities filed 6,858 AI patent applications in 2017, overtaking the U.S., which had 5,954 filings. The U.S. has lost its long-held overwhelming edge in AI. With its patent applications doubling every year, China is on track to become an AI superpower.

Most AI patent applications made in China deal with practical machine learning; few are related to XAI. China may be the first country in the world to develop black-box AI technologies for health care, autonomous driving, criminal trials, personnel decisions and other areas. But China’s approach to AI research and development, which is totally divorced from ethical considerations, inevitably raises concerns.

In April 2019, the European Commission’s High Level Expert Group on Artificial Intelligence published its “Ethics Guidelines for Trustworthy AI,” which establish seven key principles, including transparency, diversity, nondiscrimination and fairness, and accountability.

In Japan and the U.S., a raft of big companies including Google, Sony and Fujitsu have established their own ethical standards for AI because AI businesses will not be viable without the trust of users.

Hirokazu Anai, senior director of the Artificial Intelligence Lab at Fujitsu Laboratories, once said, “We have to prove to our customers that our AI is safe enough and trustworthy.”

In China, a professional committee operating under the Ministry of Science and Technology issued a set of eight principles for next-generation AI governance in June. The principles include respect for privacy, but it is hard to give much credence to them, given China’s authoritarian government.

In a nutshell, a society that places little value on human dignity cannot produce human-centric AI technology.
