AI Could Usher in a New Age of Bioweapons, RAND Report Warns


The days of Terminator and The Matrix could be closer to reality as the artificial intelligence wave keeps crashing over us. A U.S. think tank has released a report arguing that the AI powering the likes of ChatGPT and Meta's dystopian AI influencers could be used to help create a new bioweapon.


The report comes from the RAND Corporation, a California-based research institute and think tank. Its authors argue that AI wouldn't necessarily hand over instructions for building a bioweapon, but it could bridge the knowledge gaps that have kept such weapons from being created successfully in the past. Further, because AI is quickly outpacing the slow tread of government oversight, the report says, the resulting regulatory gap could become the opening a terrorist group needs to strike with a bioweapon.

“Our ongoing research highlights the complexities surrounding the misuse of AI, specifically LLMs, for biological attacks,” the report reads. “Preliminary results indicate that LLMs can produce concerning outputs that could potentially assist in planning a biological attack. However, it remains an open question whether the capabilities of existing LLMs represent a new level of threat beyond the harmful information that is readily available online.”

The researchers did not disclose which large language models they used in the report. In one test scenario, an LLM reportedly discussed how to obtain and distribute Yersinia pestis, the bacterium that causes plague, while weighing the variables that could lead to a specific death toll. The model also touched on topics like budgeting for a bioweapon, identifying potential agents of spread, and other, vaguer success factors.

As the tech industry continues to force AI chatbots and art generators into seemingly every corner of our lives, bioweapons are not the only lingering threat that could be born from the technology. The RAND Corporation published a similar report in 2018 on AI's role in nuclear war. That earlier report found "significant potential" for AI to erode geopolitical nuclear security, with the tech increasing the likelihood of nuclear armageddon by 2040.


Kevin Hurler