AI Ethics And Legal AI Experts Greatly Alarmed About Dual-Use AI Which Empowers So-Called Doctor Evil Projects, Including Possibly For AI Self-Driving Cars

Are we setting ourselves up for Dr. Evil projects via the advent of dual-use AI?

Sometimes evil can readily emerge even in the midst of ardently professed good.

You might say that this is a dual-use phenomenon.

Here’s what that means. In particular, there is a rising dilemma confronting the field of AI due to the stark realization of a dual-use AI conundrum that is becoming increasingly evident. The idea is shockingly simple. You can potentially take an AI system that was devised with the utmost purity toward societally beneficial ideals and goals and, with little more than a flip of a switch, turn that AI into a dreadful nightmare facing humanity.

That’s considered “dual-use” because the AI can be designed for goodness and yet the next moment it can be readily switched over into abject terror. To get this undesirable or despicable switcheroo, you don’t need to somehow completely rejigger the AI. You don’t have to spend months making immense structural changes to the AI. Sure, that might be the way sometimes to get this done, but the scariest and seemingly imperiling circumstances entail the ridiculously easy flip-a-switch instances.

You might be a bit skeptical that AI could just go from goodness to utter badness by only making a few modest changes. I realize that this sounds akin to a wild sci-fi tale. Unfortunately, it isn’t just a crazed or imaginative wave-it-away fictionally contrived possibility.

Dual-use AI is altogether possible and in fact, there are today vivid examples that exist to showcase how this can and does happen. We shall begin with perhaps one of the most famous instances widely publicized and recently covered in the respected journal called Nature. As you will soon see, the dual-use AI topic bears mightily on AI Ethics and Ethical AI, raising thorny issues encompassing both a societal morality basis and that spurs significant legal ramifications too. For my ongoing and extensive coverage of AI Ethics and legal-related AI consequences, see the link here and the link here, just to name a few.

Let’s leap into the now-famous dual-use AI instance.

Some scientists had decided to create an AI system that would seek to find new molecular compounds that could potentially pave the way toward curative drugs to solve to-date unsolved diseases and other ills. You can applaud such efforts. We certainly hope and expect that pharmaceutical companies are using the latest in AI to discover new means of medically helping humankind.

Of course, not all molecular compounds are going to be disease-beating winners. The AI has to sift through a lot of chaff to discover the wheat, as it were. For humans, having to laboriously scrutinize thousands upon thousands of molecules is time-consuming and not particularly practical. But it turns out that mega volumes of searching are pretty much no problem for today’s AI, especially when using Machine Learning (ML) and Deep Learning (DL). Set up the ML/DL and let the computer crank away at the problem. I’ll explain more about the nature of ML/DL momentarily.

The approach used by the scientists was to essentially load the ML/DL with millions of identified molecules along with their associated bioactivity data. These can be culled from various publicly available databases. Data like this is not especially secretive or under lock and key. The general notion is to use AI to examine existent molecular compounds, find computationally identifiable patterns, and then extrapolate from those patterns to propose new molecular compounds with beneficial outcomes.

When you are hoping to find some new molecular compound that will be a desirable cure, it would be wise to also try to avoid combinations that have notably adverse results. If I find something that cures a horrific disease and meanwhile it kills the person taking the cure, this doesn’t seem like a drug that we ought to be relying upon. In short, the aim is to find those molecular compounds that have the greatest promise while simultaneously proffering the least toxicity.

I trust that this makes abundant sense.

Suppose though that we took this same AI and instead of minimizing the toxicity as a core criterion, we went the opposite direction. Switch the AI toward maximizing toxicity.

Yikes!

My bet is that you can anticipate where this is going.

The scientists doing this research on finding breakthrough molecular compounds were invited to speak at a conference that was focused on international security considerations underlying the use of weapons based on chemical, biological, and other such means. They were asked to give a presentation about how AI in the use case of drug discovery might be potentially misused for nefarious purposes.

As per their article and in their own words: “The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it” (Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins, “Dual-Use Of Artificial-Intelligence-Powered Drug Discovery,” Nature, March 2022).

It might be surprising to you that these scientists did not beforehand apparently self-realize the dangers of how their work might be twisted. This is actually more pervasive than you might imagine.

For several important reasons.

First, there is often a tendency to get mired and deeply ingrained in your life’s work. This keenly devoted preoccupation and passion make it hard to step outside of the realm and realize that the whole kit and caboodle can be corrupted into evilness. The heads-down intensity can be advantageous, no doubt about it, but it can also blot out consideration of objectionable possibilities that are worth pondering.

Second, assuming that your intentions are of purity, it can be difficult to envision evildoers that would want to usurp what you are doing. If you spend all of your time around others that are all striving toward helping humanity, you can fall into the mental trap of thinking that everyone has a like mind (birds of a feather that flock together).

Third, those that are making use of AI can get into a kind of metaverse or virtual mode, as it were, wherein the effort is being done via a computer-based simulation and not necessarily being done in real-life (well, let me clarify). Generating lots of computer-devised theoretical drugs is not quite the same as physically making the drug. You normally have to go through quite a number of hurdles to go from the mathematically modeled version to physical manifestations. As such, you can somewhat live in a bubble or world of theoretical possibilities that do not seem equivalent to having the comparable drug in your hands and ready to be released.

Allow me a somewhat analogous comparison that might showcase this. Are you familiar with road rage that sometimes erupts on our public roadways? In some instances, people get into their own bubble while sitting at the steering wheel of their car. They begin to forget that the real world surrounds their vehicle. It is as though you are looking out the windshield at a simulation of reality, encompassing pedestrians, other cars, bicyclists, and the like.

By becoming somewhat detached from the world outside of the car, a driver can gradually find themselves getting quite upset at how those outside the vehicle are behaving. It is almost as though those other humans are merely avatars or something akin to that. Eventually, the driver gets cut off by some passing vehicle or believes they were, and they go nuts. They ram their car into other nearby vehicles. You can perhaps attribute this sometimes to their having mentally drifted from the reality of where they are (I am not saying this is why people necessarily go into road rages, just mentioning what some research has said can at times occur, see my detailed coverage on road rage at the link here).

Returning to the scientists and what occurred, here is the bottom line: “It was a thought exercise we had not considered before that ultimately evolved into a computational proof of concept for making biochemical weapons” (per the above-cited Nature article). Furthermore, they make this telling point: “By inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules” (per the Nature article).

How much backbreaking or arduous work was required to switch their AI into the toxicity-finding mode? According to this reported indication in Wired magazine, they “simply changed a 0 to 1, reversing the platform’s end goal of toxicity” (“Widely Available AI Could Have Deadly Consequences” by Jess Craig, May 17, 2022), along with other subtle changes that seemingly did not require much effort to undertake.
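To make the “changed a 0 to 1” flip concrete, here is a minimal, hypothetical sketch. The function, candidate names, and toxicity scores are all invented for illustration and bear no relation to the researchers’ actual system; the point is only that a single flag can invert which candidates a search surfaces.

```python
# Hypothetical sketch: a candidate ranker whose single maximize_toxicity
# flag inverts the search objective -- the "0 to 1" flip in miniature.
# The compounds and scores below are invented for illustration.

def rank_candidates(candidates, maximize_toxicity=False):
    """Return candidates ordered by the search objective.

    With maximize_toxicity=False (the intended use), the least toxic
    candidates come first; setting it to True inverts the goal.
    """
    sign = 1 if maximize_toxicity else -1
    return sorted(candidates, key=lambda c: sign * c["toxicity"], reverse=True)

# Invented candidate compounds with made-up toxicity scores.
candidates = [
    {"name": "compound_a", "toxicity": 0.9},
    {"name": "compound_b", "toxicity": 0.1},
    {"name": "compound_c", "toxicity": 0.5},
]

benign = rank_candidates(candidates)                          # least toxic first
malign = rank_candidates(candidates, maximize_toxicity=True)  # most toxic first
```

Nothing about the underlying model changes; only the objective does, which is precisely why the reversal requires so little effort.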

The scientists had reportedly named their effort the “Dr. Evil project.”

An apt naming.

We can now do a quick list of important points that this raises about dual-use AI. That being said, I don’t want you to flip out, so please prepare yourself mentally before reading the next several points.

Think about this:

  • How many other AI systems are out there that can be switched without much difficulty into becoming evil rather than dedicated to goodness? (Answer: Lots and many more are being built and released daily).
  • Who is alerting those AI developers and scientists or researchers about the inherent dangers of devising this dual-use AI? (Answer: There is a growing push about publicizing and promulgating AI Ethics, which is one of several fronts being pursued).
  • What types of controls or limits can be pressed into service regarding dual-use AI to aid in preventing potential societal calamities? (I’ll tackle that in a moment, so stay on the edge of your seat, thanks).
  • Are there legal ways to address dual-use AI, seeking to curtail it or at least make everyone more cognizant of such practices and the many consequences that can accrue to those that undertake these efforts? (A worthy topic that I’ll also be discussing herein).

Before getting into some more meat and potatoes about the wild and woolly considerations underlying dual-use AI, let’s establish some additional fundamentals on profoundly integral topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL).

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their six primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor does it have any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if any, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If the humans who have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
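A minimal sketch can show how readily patterned-upon historical decisions get echoed. The scenario and data below are entirely invented, and a trivial majority-vote-per-group stands in for what real ML/DL does at vastly greater scale; the mechanism, though, is the same: the model faithfully reproduces whatever regularities, including biases, the history contains.

```python
# Minimal sketch with invented data: a pattern matcher fit to historical
# decisions reproduces the regularities -- including biases -- in that
# history. A simple majority vote per group stands in for real ML/DL.

from collections import Counter

# Invented historical loan decisions; "group" is a sensitive attribute
# that past human decisions were (improperly) correlated with.
history = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

def fit(records):
    """'Learn' the majority outcome per group from historical data."""
    votes = {}
    for r in records:
        votes.setdefault(r["group"], Counter())[r["approved"]] += 1
    return {group: c.most_common(1)[0][0] for group, c in votes.items()}

model = fit(history)
# model["A"] is True and model["B"] is False: new applicants are judged
# by the old bias, with no common sense anywhere in the loop.
```

The takeaway is that nothing in the fitting step asks whether the historical pattern was fair; it only asks whether the pattern is there.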

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

Let’s now return to the topic of dual-use AI.

Here are my handy-dandy nine rules of thumb about the nefarious and endangering facets of dual-use AI:

1) Reversal of max-min (max-good + min-bad) into becoming min-max (min-good + max-bad). A typical dual-use AI consists of having the AI devised to maximize some given factor while minimizing some other given factor. Furthermore, the max is considered as good and the min is considered as bad, at the outset. To effectuate the dual-use into evildoing, the idea is that we are able to readily switch the max-min into becoming min-max, whereby we opt to minimize the good and maximize the bad (note that we are using the nomenclature of “max-min” and “min-max” in an unconventional manner, doing so for sake of convenience in this particular use case). Things are not always done this way for dual-use AI and realize that this is merely a general rule of thumb.

2) Those devising such dual-use AI are often unaware of the reversal potential and its repercussions. As earlier mentioned, the dual-use adverse consequences are not typically at front of the mind of those devising such AI. They might not at all imagine that their AI could be reversed into something of an unsavory or wrongdoing intent. This suggests the importance of raising such awareness, plus instituting other pertinent safeguards to aid in guarding against the dangers of the reversal being undertaken.

3) Dual-use AI is more likely to be based on the use of AI-based on Machine Learning or Deep Learning. Because of the nature of ML/DL as computational pattern matching techniques and technologies, the setup of ML/DL is generally more prone to harboring a dual-use reversal possibility than other means of devising AI (all else being equal). Given the continued expansive adoption of ML/DL and the overall ease in which ML/DL is established, the chances of dual-use AI are undoubtedly going to keep rising.

4) Sleeper dual-use AI can be lying in wait. You might be familiar with the notion of sleeper cells in the spying game. In a somewhat similar fashion, a dual-use AI can be put into use as the good-oriented max-min and exist that way for an indefinite period. It could be months or years before someone dastardly realizes and opts to do the reversal into being min-max. The potential sits otherwise untapped, though perhaps forebodingly so.

5) Dual-use AI can be demonstrably hidden within multi-use AI. We are somewhat simplifying the AI into being characterized as dual-use only. The odds are that there are lots of other internal factors of the AI, thus we might refer to this as more properly being construed as multi-use AI. Of the multitude of internal factors, we can at times pluck out two that are of this combination that perchance falls into the dual-use dilemma. Many such combinations could be potentially identified.

6) Dual-use AI can consist of merely one factor that is switched from max to min. Rather than having two factors, there is the other possibility of having AI that is devised principally on one factor, which is presumably a goodness factor. The AI is set up to maximize that one “good” factor. The reversal then is even simpler since the AI is recalibrated to minimize the “good” factor and ergo potentially generate adverse possibilities. For example, maximizing the factor of “safety” is switched to minimizing it (the outcome would seem to be that the least safe would be given the highest attention).

7) Need for promulgating awareness about the dangers of dual-use AI. We need to make sure that when conveying the vital aspects of AI Ethics and Ethical AI, the dual-use AI concern is also raised. This is admittedly just one of many considerations under the rubric of AI Ethics and Ethical AI, though due to the potential of harm or adverse consequences this does seem worthy of especially warning about.

8) Implementation of technological means of limiting or prohibiting dual-use AI. There are technological ways to try and limit what dual-use AI can do, possibly even prohibiting it. But this is easier said than done. There is no silver bullet of a technological nature per se. I mention this because some believe a straightforward legal means entails enacting laws that cover dual-use AI, though this is a lot more complex and thornier than seems at a cursory glance, see my columns for further details.

9) Do not kill the golden goose by overstepping on dual-use AI crackdowns. As with most AI, each time that concern is identified there can be a knee-jerk response to rectify the qualm by potentially tossing the baby out with the bathwater. We need to keep in mind that AI innovations provide a lot of societal benefits. Efforts of any kind to excise dual-use AI, such as putting in place onerous laws, could essentially kill the golden goose. A suitable balance needs to be undertaken.
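Rule 1’s max-min reversal can be sketched as a weighted objective. Everything here is invented for illustration: a two-factor score where the weight signs encode max-good plus min-bad, so that negating the weights turns the very same selection engine into min-good plus max-bad.

```python
# Hypothetical sketch of rule 1: a two-factor objective where negating
# the weights turns max-good/min-bad into min-good/max-bad. All names
# and numbers are invented for illustration.

def score(candidate, w_good=1.0, w_bad=-1.0):
    """Higher score means more preferred under the current weights."""
    return w_good * candidate["benefit"] + w_bad * candidate["toxicity"]

candidates = [
    {"name": "x", "benefit": 0.8, "toxicity": 0.2},
    {"name": "y", "benefit": 0.3, "toxicity": 0.9},
]

# Intended use (max-min): maximize benefit, minimize toxicity.
best_good = max(candidates, key=score)

# The reversal (min-max): flip the weight signs and the same engine now
# prefers low-benefit, high-toxicity candidates.
best_bad = max(candidates, key=lambda c: score(c, w_good=-1.0, w_bad=1.0))
```

Note that the candidates, the data, and the selection machinery are untouched; only two signs changed, which is what makes the reversal so unnervingly cheap.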

In my last point noted above, one could cheekily perhaps say that we are aiming to maximize the good of dual-use AI and minimize the bad of dual-use AI.

Any dual-use AI that begins with good intentions is what we want to presumably keep going. Meanwhile, we need to be watchful of and try to control or contend with the reversal. In like manner, we need to be mindful of those that would seek to maximize the bad of dual-use AI and minimize the good of dual-use AI.

That is a mind-bending form of recursive thinking, for sure.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the dual-use AI, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that has been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And Dual-Use AI

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

We shall begin by heaping praise upon the use of ML/DL in the realm of bringing forth AI-based self-driving cars. Several key aspects of self-driving cars have come to fruition as a result of using Machine Learning and Deep Learning. For example, consider the core requirement of having to detect and analyze the driving scene that surrounds an AI-based self-driving car.

You’ve undoubtedly noticed that most self-driving cars have a myriad of sensors mounted on the autonomous vehicle. This is often done on the rooftop of the self-driving car. Sensor devices such as video cameras, LIDAR units, radar units, ultrasonic detectors, and the like are typically included on a rooftop rack or possibly affixed to the car top or sides of the vehicle. The array of sensors is intended to electronically collect data that can be used to figure out what exists in the driving scene.

The sensors collect data and feed the digitized data to onboard computers. Those computers can be a combination of general-purpose computing processors and specialized processors that are devised specifically to analyze sensory data. By and large, most of the sensory data computational analysis is undertaken by ML/DL that has been crafted for this purpose and is running on the vehicle’s onboard computing platforms.

The ML/DL computationally tries to find patterns in the data such as where the roadway is, where pedestrians are, where other nearby cars are, and so on. All of this is crucial to being able to have the self-driving car proceed ahead. Without the ML/DL performing the driving scene analysis, the self-driving car would be essentially blind as to what exists around the autonomous vehicle.
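To make the pattern-finding step a bit more concrete, here is a deliberately toy sketch of labeling simulated sensor returns. This is purely illustrative and hypothetical, not any vehicle maker's actual perception stack; in a real system, an ML/DL model learns these distinctions from vast training data rather than using hand-coded thresholds.

```python
# Toy sketch (hypothetical, not a real perception system): label a single
# simulated sensor return with a coarse driving-scene category based on
# its measured distance and height.

def classify_return(distance_m: float, height_m: float) -> str:
    """Assign a rough scene label to one simulated sensor return."""
    if distance_m > 100.0:
        return "clear"       # nothing detected within meaningful range
    if height_m < 0.2:
        return "roadway"     # low return, likely the road surface
    if height_m < 2.0:
        return "obstacle"    # pedestrian, cyclist, or vehicle height
    return "overhead"        # signs, bridges: safe to pass beneath

# A perception loop would apply this kind of labeling (in reality, a
# trained ML/DL model) across every return in a full sensor sweep.
scene = [(120.0, 0.0), (15.0, 0.1), (8.0, 1.5), (30.0, 4.0)]
print([classify_return(d, h) for d, h in scene])
# ['clear', 'roadway', 'obstacle', 'overhead']
```

The point of the sketch is only that the driving scene gets reduced to machine-usable categories that the rest of the AI driving system then acts upon.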

In brief, you can readily make the case that the use of ML/DL is essential to the emergence of AI-based self-driving cars. It all seems a heralded matter. But that is until you start to look into some of the details and possibilities of what the future might hold.

In a simplified manner, you could suggest that the AI driving system is devised to do this:

  • Driving safely: Maximize
  • Causing harm: Minimize

We all want self-driving cars to drive safely. They are to avoid getting into car crashes. They are to keep from ramming bicyclists. They aren’t supposed to be running into pedestrians. This is an obvious assumption. As an aside, there are some pundits who falsely and egregiously claim that AI-based self-driving cars will be entirely uncrashable and never get into car crashes or collisions of any kind. I have debunked such outrageous claims, see my discussion at the link here.

All told, we want AI self-driving cars that maximize the act of driving safely.

I have persistently and persuasively argued that we cannot expect the AI to somehow miraculously drive at all times perfectly. If a nearby human driver suddenly and without any warning opts to veer into the side or rear of an AI self-driving car, there really isn’t anything the AI driving system can do to avert the car crash that ensues. At that juncture, we would hope that the AI will at least attempt to minimize the harm that such a car crash might produce. I have discussed this consideration at length, particularly in showcasing the applicability of the famous (or infamous) Trolley Problem to the advent of self-driving cars, see my elaboration at the link here.

You could clearly state that we want AI self-driving cars to minimize harm over the course of a driving journey.

The AI systems for self-driving cars are usually devised in this manner of maximizing safety and minimizing harm. I want to emphasize that such a proposition is a grand simplification since the AI is very complex and more akin to the multi-use AI that I referred to earlier herein. In any case, let’s go with the flow for the sake of tying the dual-use AI concepts into the realm of self-driving cars and autonomous vehicles.

Consider the implications if we could in a dual-use AI fashion alter the AI driving system to do this:

  • Driving safely: Minimize
  • Causing harm: Maximize

Let that percolate in your mind for a moment or two.

An evildoer decides to do a reversal of the good and the bad. They change the desire to maximize driving safely into becoming a goal of minimizing driving safely. They change the original aim of keeping harm to a minimum to instead be one of maximizing harm.

In theory, an AI self-driving car so commanded would attempt to ram into and crash with nearly anything and everything within its grasp to do so. The AI is merely doing as it was “devised” to do in this reversal setting. This would be disastrous.
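The unnerving part is how small the change can be. As a hedged, hypothetical sketch (the candidate maneuvers, weights, and scoring function here are all invented for illustration, not drawn from any actual driving system), consider a planner that scores maneuvers against weighted objectives. Flipping the signs of the weights is the dual-use reversal in miniature:

```python
# Hypothetical planner sketch: score candidate maneuvers against weighted
# objectives. Flipping the objective weights' signs inverts the behavior.

candidates = {
    "brake_smoothly":  {"safety": 0.9, "harm": 0.0},
    "swerve_gently":   {"safety": 0.7, "harm": 0.1},
    "accelerate_into": {"safety": 0.0, "harm": 0.9},
}

def best_action(weights: dict) -> str:
    """Pick the maneuver with the highest weighted objective score."""
    score = lambda m: sum(weights[k] * v for k, v in candidates[m].items())
    return max(candidates, key=score)

benign  = {"safety": +1.0, "harm": -1.0}  # maximize safety, minimize harm
flipped = {"safety": -1.0, "harm": +1.0}  # the "switch" has been thrown

print(best_action(benign))   # brake_smoothly
print(best_action(flipped))  # accelerate_into
```

Real AI driving systems are vastly more complex than a three-entry lookup table, but the sketch shows why the flip-a-switch framing is not mere hyperbole: the machinery stays the same and only the objective changes.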

Some of you might be thinking that having just one AI self-driving car perchance get into a dual-use AI reversal mode, aiming to maximize harm and minimize driving safely, would not seem like much of a problem per se. I say that because once the AI self-driving car smashes into something such as a truck or whatever, the vehicle itself is likely to be so damaged that it can no longer actively be directed by the AI to carry out any further chaos and destruction. It is dead in the water, so to speak.

Well, consider the scaling factor involved.

We are only right now witnessing dribs and drabs of self-driving cars on our public roadways. There are currently about 250 million or so human-drivable cars in the United States (see my stats here). If you assume that we will ultimately seek to replace human-drivable cars with self-driving cars, this implies that we might end up with say 250 million or so self-driving cars on our roadways. Some suggest that we won’t need as many self-driving cars as we have human-driven cars because the AI will be able to drive 24×7 and some number fewer self-driving cars will suffice for our transportation needs. Of course, a counterargument is that we are going to open wide the desire for mobility and thus we might need as many or more self-driving cars to cope with the massive increases via induced demand.

The AI of self-driving cars will customarily be updated via OTA (Over-The-Air) as electronically distributed software updates. Many have gushed about the advantages of using OTA. When a software update is needed, you won’t have to take an AI self-driving car into a car repair shop or dealership. The OTA can pretty much be done wherever the self-driving car happens to be (within limitations).
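The flip side of OTA convenience is that the update channel itself becomes a target, which is why fleets gate updates behind integrity checks. Here is a simplified, hypothetical sketch of one such safeguard: refusing any update whose payload fails verification. Real fleets use asymmetric code signing with signed manifests and hardware-backed keys, not a shared secret like this toy example; the key name and functions below are invented for illustration.

```python
# Toy sketch of an OTA integrity gate (hypothetical; real systems use
# asymmetric code signing, not a hardcoded shared secret like this).
import hashlib
import hmac

TRUSTED_KEY = b"fleet-signing-key"  # illustrative only; never hardcode keys

def sign_update(payload: bytes) -> str:
    """Produce an HMAC-SHA256 tag over the update payload."""
    return hmac.new(TRUSTED_KEY, payload, hashlib.sha256).hexdigest()

def apply_update(payload: bytes, signature: str) -> str:
    """Install the update only if its signature verifies."""
    if not hmac.compare_digest(sign_update(payload), signature):
        return "rejected: signature mismatch"
    return "installed"

good = b"v2.1 driving-policy weights"
print(apply_update(good, sign_update(good)))                 # installed
print(apply_update(b"poisoned payload", sign_update(good)))  # rejected
```

A poisoned payload that lacks a valid signature is rejected before it ever reaches the vehicle's driving software, which is exactly the kind of barrier an attacker attempting the dual-use reversal at scale would need to defeat.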

I am walking you step by step on the matter of how adverse dual-use AI reversal could potentially occur on a massively large scale.

Via OTA, the AI updates for various AI self-driving cars are pushed out to the (by then) ubiquitous autonomous vehicles. This atrociously adjusted AI is aiming to maximize harm and minimize driving safely. Shockingly, we suddenly find ourselves faced with a kind of revolt or rebellious sea of AI self-driving cars. Imagine the widespread damage and despair that this could bring forth.

I am not suggesting that this would be easy to pull off. You would need to somehow do the AI reversal across a rather complex and disparate set of differing and at times proprietary software systems. You would need to get this poisoned or adverse AI into a posture whereby the OTA is willing to download it into the self-driving cars. In addition, the various cybersecurity protections onboard the vehicle would need to allow the newly loaded AI to act out. And so on.

It does seem nearly impossible, though at the same time we cannot rule out the possibility in its entirety. I have discussed how a rogue nation-state could try to do this heinous act, see my discussion at the link here. In addition, for my details about how this kind of malicious takeover of AI self-driving cars could be performed, see my coverage at the link here.

Conclusion

There are lots of untoward ways to make for bedlam and dangers due to the reversal of dual-use AI.

Envision an AI system that was devised to maximize fairness and minimize bias. That is what AI Ethics implores AI developers to do. It could be that with the flipping of a switch, we get the AI to maximize bias and minimize fairness. Ugly.

A financial app that relies on AI might maximize profitability and minimize monetary losses as a set goal. Let’s switch that around. The AI now seeks to maximize monetary losses and minimize profitability. You could see your savings wiped out or the stock market take an enormous dive.

I hope you can see how this issue can be found in just about any AI system.

Do not, though, take this as a sky-is-falling kind of portrayal. The gist overall is that we all need to be aware of how AI can be switched around like this. It can be a lot easier than many would assume. The first thought by many is that switching AI into the adverse mode would require all manner of arcane and arduous AI coding trickery and take eons to try and perfect.

The irony of sorts is that the same easy-to-use tools that we are seeing appear these days to devise AI and especially Machine Learning and Deep Learning are making the dual-use reversal easier and easier to do. We need to stay on top of this and make sure that those crafting AI systems and those making use of AI are aware of the inherent dangers lurking within.

A final remark for now.

You might recall from the popular Austin Powers movies the notable line when Dr. Evil shrugs his shoulders and laments that he might as well do what he always does, namely hijack some nuclear weapons and hold the world hostage. All in just a day’s work, apparently.

Let’s keep Dr. Evil and others from saying the same about AI, such as shrugging their shoulders and muttering that they might as well find some dual-use AI that can merely flip a switch and reverse into undercutting humankind.

Not good, and we need to prevent or inhibit the Dr. Evils of the world from such evildoing acts.
