Navigating The Risks Of AI Weaponization

Brian Neely is CIO and CISO of American Systems.


If you grew up during the Cold War, you probably remember the Doomsday Clock, a vivid symbol of how close humanity is to potentially destroying the world through dangerous technology. At the moment it stands at 90 seconds to midnight—as close as it has ever been since the clock was introduced in 1947. A surprising addition to this year’s clock is the emergence of AI and its potential for misuse and even weaponization.

As AI continues to advance and become more pervasive, so do its risks. From mass disinformation campaigns and deepfakes to fully autonomous weapon systems, the convergence of AI, technology and “bad intentions” demands priority attention. As a government contractor in the defense and intelligence sectors, I’m increasingly concerned about where this path could take us, but I’m also very optimistic that emerging legal, technical and ethics-based solutions can make an impact.

A New Battlefield Is Emerging

The biggest challenge comes from the potential reckless use and weaponization of AI by advanced threat actors, from organized ideology-based groups to highly resourced nation-states. As AI becomes more pervasive and technology becomes less expensive, fringe groups will find it easier to combine the two. The lower the barrier to entry, the more open the doors are to potential misuse.

Even in its infancy, AI is already being weaponized and integrated into inexpensive commercial technologies. For example, thousands of AI-enabled “swarm drones” are being considered for use in the Ukraine conflict to overwhelm key strategic Russian targets. Ukraine may prove to be an inflection point for modern conflict, as it is quickly becoming a “proving ground” for AI and autonomous technologies. A new era of weapons is emerging as a force multiplier for conventional warfare. In Ukraine, we’re already seeing unmanned robotics fighting one another, in the skies and on the ground. We are not only getting a glimpse into the future of warfare but also watching weapon strategy evolve, as an AI arms race among governments and tech companies collides with conventional conflict, forging a new component of war: AI and data “arms dealers.”

Generative AI can also be used to effortlessly wage highly impactful disinformation campaigns—swaying opinions, rapidly spreading falsehoods for political subversion or even igniting an emerging flashpoint. Imagine falsified images of wartime atrocities or deepfake videos of world leaders calling for new attacks or retribution.

Unintended Consequences

With the rush to get AI onto the battlefield, short-sighted tactical advantages could lead to spiraling long-term ramifications. AI is already being used to build fully autonomous weapon systems, albeit still on a very limited scale. Eventually, these will become far more advanced and widespread, bringing the significant risk of AI going awry—at the extreme, think of Skynet from the Terminator movies.

Just as the algorithmic trading technologies used by high-frequency trading firms can get out of control and create a cascading series of failures, autonomous weapon platforms could get caught in feedback loops and quickly escalate conflicts. AI-versus-AI battles are just around the corner, and if there are no “circuit breakers” in place, human or otherwise, that’s a scary scenario. Integrating safeguards into military AI systems may not be as simple as implementing Isaac Asimov’s famous three laws of robotics, but developing reliable, effective strategies is a must.

Safeguards and Standards

Proper guardrails and new regulations for responsible AI will be critical as we move forward, and so will keeping humans in any potentially lethal decision-making loop. Key publications in 2023 offer a promising start toward implementing broader AI safeguards.

The first, an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, focuses on establishing rigorous standards for managing AI’s potential risk across a wide range of industries, including cybersecurity and national security, while harnessing its benefits. Provisions include protections against using AI to engineer biological weapons and requiring companies to report testing results to the government. The Executive Order’s very first statement—“Artificial intelligence holds extraordinary potential for both promise and peril”—embodies the stark contrast of AI’s potential for both good and bad, and the recognition of the need for AI accountability.

Foundational to the strategy will be the use of the National Institute of Standards and Technology (NIST) AI Risk Management Framework, part of the agency’s Trustworthy and Responsible AI Resource Center, which has been established to “ensure the development of safe, secure, and trustworthy AI systems.” AI is significantly different from traditional software systems and poses increased and evolving risks. AI systems learn from various sources that can change, sometimes frequently and unexpectedly, which impacts their expected behavior. AI systems are also “socio-technical” in nature, meaning that they are influenced by societal dynamics and human behavior, and can be highly influenced by computational, systemic and human biases.

On the international scale, the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy provides a promising framework for the responsible use of AI and autonomy in military contexts. As of November 2023, 47 nations had signed on to the declaration, which contains non-legally binding guidelines to ensure that military AI capabilities are used in a manner consistent with international humanitarian law. One important provision requires disengaging or deactivating systems that exhibit unintended behavior.

If this seems overwhelming, keep in mind that we have already come together to limit the proliferation of destructive weapons through multilateral treaties like the Chemical and Biological Weapons Convention. It can be done. But it will take a united effort. We’re at the precipice of something big, along the lines of the invention of gunpowder or nuclear weapons. The more safeguards we can put in place up front, the better for all of us.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.