Now that science fiction is reality, it’s time for new laws of robotics

In his science fiction, Isaac Asimov set out three laws of robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The laws were not simply a plot device or medley of mechanical morals. Rather, they signaled a utopian belief that rational robots could faithfully serve unpredictable humans. Asimov was sick of cautionary tales of technology run amok, like “Frankenstein.” For him, a sophisticated storyteller needed to focus on engineering, not emotion; facts, not fate. Commenting on his own fiction, he proclaimed:

Never, never was one of my robots to turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust. Nonsense! My robots were machines designed by engineers, not pseudo-men created by blasphemers. My robots reacted along the rational lines that existed in their “brains” from the moment of construction.

This vision flattered technologists, who returned the favor. Asimov’s laws became a starting point for discussions of AI ethics, law, and policy in everything from movie dialogue to academic papers.

With their nested structure reminiscent of computer code, Asimov’s laws give the impression that artificial intelligence could automatically encompass morality. But things aren’t so simple, as the complicated plots of Asimov’s stories themselves show.

One problem is that the laws are not as clear-cut as they might seem. Can an autonomous drone blast a terrorist cell that it locates? The first half of the first law (“A robot may not injure a human being”) seems to forbid such an action. But a soldier might quickly invoke the second half of the first law (forbidding “inaction” that would “allow a human being to come to harm”).

The ambiguities do not stop at the battlefield. Consider, for instance, whether Asimov’s laws allow robotic cars. Self-driving vehicles promise to eliminate many thousands of traffic fatalities each year. On the other hand, they would also put hundreds of thousands of paid drivers out of work. Does that harm entitle governments to slow the adoption of self-driving cars? Or do only physical injuries matter?

Another problem is that Asimov’s laws don’t ensure that most human beings — not just investors and engineers — have some say in how AI and robots are created and what goals they pursue. These are legal and political questions that require the involvement of government regulators.

Avoiding the worst outcomes in the artificial intelligence revolution while capitalizing on its potential will depend on a wise balance between human insight and computational power in our daily lives. At present, AI and robotics most often enhance, rather than replace, human labor, empowering people rather than rendering them redundant. Things can stay that way if our institutions of governance follow concrete principles that recognize the value of quintessentially human work and the dangers of robots run amok.

With those ends in mind, here are four new laws of robotics.

1. Robotic systems and AI should complement professionals, not replace them.

Robots have revolutionized manufacturing — and should keep doing so. They’ve freed humans from difficult, dirty, and dangerous jobs in agriculture and logistics. These celebrated advances fuel dreams of self-driving cars and delivery drones. Some experts extrapolate that almost every job is destined to be whittled away by technological advance. But that outcome is one we should strive to avoid in many areas.

Robotic meatcutters relieve workers of dangerous work; robotic day care gives us pause. The latter caution isn’t mere Luddite reaction. Rather, it reflects a deeper wisdom about the nature of childhood. School time is as much about learning how to work and play with others as it is about acquiring knowledge. Medical licensing rules now keep symptom-analyzing apps from being marketed as if they were physicians practicing medicine. That is a good policy: Many apps are dangerously unvetted, and even if they were all certified to be effective, professional medical advice can be invaluable.

No doubt the boosters of robot caregivers will peddle studies “proving” no worse results for their wards. We do well to suspect such studies, as data about such outcomes is so easy to manipulate. More importantly, narrowly defined “results” of labor are only one facet of a complex policy picture. Meaningful work is critical to the self-worth of people and the governance of communities.

A humane agenda for automation would prioritize innovations that complement workers in jobs that are, or ought to be, fulfilling vocations. It would substitute machines for humans in dangerous work, while ensuring that those now doing those jobs are fairly compensated for their labor and offered a transition to other social roles.

Sometimes it will be difficult to demonstrate that a human-centered process is better than an automated one. Crude monetary metrics crowd out complex critical standards. For example, machine learning programs may soon predict, based on brute-force natural language processing, whether one book proposal is more likely than another to yield a best seller. From a purely economic perspective, such programs may be better than editors or directors at picking manuscripts or film scripts. Nevertheless, those in creative industries should stand up for their connoisseurship. Editors have an important role in finding and promoting work that the public may not (now) want, but needs. The same could be said of journalists. Even if automated text generation could churn out ad-maximizing copy, that hollow triumph should never replace genuine reporting from an authentic, hard-won, human point of view.

Even in fields that seem most subject to the automation imperative — fields like logistics, cleaning, agriculture, and mining — workers will play a critical role in a long transition to AI and robotics. And regulations can make their lives better. For example, European law empowers drivers to resist the kind of 360-degree surveillance — including constant video and audio recording — that oppresses truckers in the United States. That is not to say that such a dangerous occupation should go unmonitored. Machine vision may spot problems with a driver’s reflexes. But there is a world of difference between sensors specifically aimed at safety lapses and the unrelenting gaze of a pair of robotic eyes. Workers should have a say in the technologies they use daily — rather than simply being bossed around by or through them.

There are many examples of technologies that make jobs more productive, more rewarding, or both. Contemporary law students can barely believe that pre-Internet lawyers had to comb through dusty tomes to assess the validity of a case; research software makes that process easier and vastly expands the range of resources available for an argument. Spending less time hunting down books and more time doing the intellectual work of synthesizing cases is a net plus for attorneys.

Automation can bring similar efficiencies to myriad other workers, without mass displacement of labor. This is not merely an observation; it is a proper goal of policy. Without such commitments, we risk an AI explosion replacing more and more people with mechanical simulations.

That dark prospect necessitates a second new law of robotics.

2. Robotic systems and AI should not counterfeit humanity.

From Asimov’s time to the vertiginous mimicry of “Westworld,” the prospect of humanoid robots has been fascinating, frightening, and titillating. Some roboticists aspire to find the right mix of metal bones and plastic skin that can break out of the “uncanny valley” — the queasiness induced by a humanoid robot that comes close to, but falls just short of, recreating human features and gestures. Machine-learning programs can already create pictures of “fake people,” and convincing synthetic voices may soon become common. As engineers scramble to fine-tune these algorithms, a larger question goes unasked: Do we want to live in a world where human beings do not know whether they are dealing with a fellow human or a machine?

Leading European ethicists have argued for limits on programs that lead people to believe they are dealing with human beings. Lawmakers in California have already banned the undisclosed use of bots in commercial and political communications. Despite this growing ethical consensus, there are subfields of AI — such as affective computing, which analyzes and simulates human emotion — devoted to making it more and more difficult for us to distinguish between humans and machines. These research projects might culminate in a creation like the advanced androids in the Steven Spielberg film “A.I.,” indistinguishable from human beings.

Ethicists debate how such humanoid robots should be designed. But what if they should not be made at all? The race to mimic humanity might too easily become a prelude to replacing it.

The voice or face of another human being demands respect and concern; machines have no such claim on our conscience. When chatbots fool the unwary into thinking that they are interacting with humans, their programmers act as counterfeiters, falsifying features of actual human existence to increase the status of their machines. When the counterfeiting of money reaches a critical mass, genuine currency loses value. Much the same fate lies in store for human relationships in societies that allow machines to freely mimic the emotions, speech, and appearance of humans. Indeed, if we let deceptive robots (in real space) and bots (online) proceed unimpeded, they may compete for attention far more effectively than some persons.

That dystopian possibility is one of many rationales for a third new law of robotics.

3. Robotic systems and AI should not intensify zero-sum arms races.

A global coalition of civil society organizations is pushing nations to pledge not to develop “killer robots,” also known as lethal autonomous weapons systems, or LAWS. Several factors now stymie this commendable proposal for technological restraint. Military leaders distrust their counterparts in rival countries. They may hide militarized AI research even as they publicly disclaim any such intent. This is but one of many ways an arms race begins.

As AI and robotics find military applications, the stakes of falling behind one’s rivals rise, since emerging technologies promise to be so much more targeted, ubiquitous, and rapidly deployed than present-day ones. Dovish politicians may commit themselves to a purely defensive posture. But defenses can often be repurposed as offensive weapons. Think of autonomous drones designed to destroy missiles but reprogrammed to assassinate generals. Even protective plans can seem aggressive, as in the case of Ronald Reagan’s Strategic Defense Initiative, otherwise known as Star Wars. It is much easier to bomb an enemy if you know you can protect yourself from its bombs. Automated cyberattacks and disinformation campaigns threaten to disrupt long-settled expectations about the purpose and limits of international conflict.

This third new law of robotics has applications in many other arenas as well. Machines increasingly judge humans who are looking for work or loans. Proctoring software watches students taking tests; the next step is to measure their engagement in the classroom or online by constantly scanning their faces. Some Chinese schools have even deployed an AI system to evaluate students’ faces second by second for engagement, concentration, and emotional state. Anxious to monitor remote students, university administrators in the United States are also implementing invasive software. Owners of these tools are likely to demand more and more opportunities to collect data, in order to develop correlations between, say, facial expression and grades, or social media activity and creditworthiness.

We need to find ways of limiting these dynamics, lest we all be forced into a competition to reveal as much of ourselves as possible to win over algorithmic overlords. One easy step forward would be to ban such cameras in most classrooms, or to forbid lenders from basing their decisions on social media activity. Weakened during the Trump years, the US administrative state has many opportunities to prove its worth by protecting us from a future better left to “Black Mirror” episodes.

Human security and creativity thrive in environments that balance predictability and openness, order and flux. AI and robotics threaten to make social control too perfect, and competition — to be the entity capable of imposing or evading that control — too fierce. If we fail to limit robotics embedded in systems of social control, that balance will be upended.

One way to ensure those limits are durable and effective is to make sure we know exactly who controls and benefits from any robotic system or AI we encounter — which in turn leads to a fourth and final new law of robotics.

4. Robotic systems and AI must always indicate the identity of their creators, controllers, and owners.

If every car needs a license plate, every drone should have its digital equivalent — especially when such machines can be programmed to hover right next to your window to watch you. Given all the damage robots can inflict, regulators will need to require that identifiable persons be responsible for them.

True, some programs can now generate new programs, which in turn may spawn others. But we can still trace these “mind children” and grandchildren back to their source. We should keep this status quo for the foreseeable future, despite predictable resistance from some advocates of fully autonomous AI.

Some robots and algorithms will evolve away from the ideals programmed into them by their owners, as a result of interactions with other persons and machines. Think, for instance, of advanced self-driving cars that learn new behaviors on the road. In such cases, there may be multiple potentially responsible parties for any given machine’s development and eventual actions. Whatever affects the evolution of such machines, the original creator should be obliged to build in certain constraints on the code’s evolution to both record influences and prevent bad outcomes. Once another person or entity hacks or disables those constraints, the hacker would be responsible for the robot’s wrongdoing.

Technologists may dismiss these four new laws by saying it is too early to regulate robotics. Let problems develop and only then move to counter them, say the partisans of laissez-faire. But quietism misses the mark. All too often in high-technology fields, industry says it is never a good time to regulate. When troubling new business practices emerge, would-be regulators are accused of strangling an “infant industry.” Once the practices are widespread, the very fact of their ubiquity is offered as proof that consumers accept them.

A wait-and-see attitude ignores the ways in which technology, far from being independent of our values, comes to shape them. We can uphold a culture of complementing human beings rather than replacing them. We can attain and afford a world ruled by persons, not machines. The future of AI can be inclusive and democratic, reflecting the efforts and hopes of all citizens. And the new laws of robotics can guide us on this journey.

Frank Pasquale, a professor at Brooklyn Law School, is the author of “New Laws of Robotics: Defending Human Expertise in the Age of AI,” from which this essay is adapted. Follow him on Twitter @FrankPasquale.
