Before we talk about the US military using big data and artificial intelligence to try to predict future events, we might as well address the elephant — or rather the notoriously small-statured but undeniably charismatic actor — in the room.
Yes, it sounds a lot like that old 2002 Tom Cruise sci-fi classic Minority Report; the one in which law enforcement uses genetically mutated human “precogs” with psychic abilities to bust criminals before they actually commit their crimes.
“What we’ve seen is the ability to get way further — what I call left — left of being reactive to actually being proactive,” Gen. Glen D. VanHerck, commander of North American Aerospace Defense Command, or NORAD, and US Northern Command told reporters at a briefing last week. “And I’m talking not minutes and hours — I’m talking days.”
VanHerck was discussing the latest results of the Global Information Dominance Experiment, also known as GIDE, a so-called cross-command event that involved representatives from all 11 combatant commands in the US Department of Defense.
The Pentagon hasn’t released many specific details on what exactly GIDE involves, but it certainly doesn’t include any precogs bathing in creepy opaque white liquids. Rather, the idea seems to be combining data with machine learning and other forms of artificial intelligence to gain enough of an informational edge to enable the proactive approach VanHerck describes.
“We’re taking sensors from around the globe, not only military sensors but commercially available information, and utilizing that for domain awareness,” he explained. “We would take artificial intelligence and use machine learning to take a look and assess, for example, the average number of cars in a parking lot that may be there in a specific location to a competitor or a threat.”
If the AI detected certain changes of interest in that parking lot, it might send an alert suggesting that someone take a close look at satellite imagery of the area for suspicious activity.
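The Pentagon hasn't published how that detection actually works, but the parking-lot idea VanHerck describes — compare today's observed car count against a site's historical baseline and flag big deviations for a human analyst — can be sketched in a few lines. Everything below (the function name, the z-score approach, the threshold) is an illustrative assumption, not a detail of the real system.

```python
# Hypothetical sketch: flag a parking-lot car count that deviates
# sharply from the site's historical baseline. The z-score method and
# the 3-sigma threshold are assumptions for illustration only.
from statistics import mean, stdev

def flag_anomaly(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Return True if today's count sits more than z_threshold
    standard deviations from the historical mean."""
    baseline = mean(history)
    spread = stdev(history)
    if spread == 0:
        # A perfectly flat history: any change at all is notable.
        return today != baseline
    return abs(today - baseline) / spread > z_threshold

# A typical week hovers around 40 cars; only the surge gets flagged.
history = [38, 41, 40, 39, 42, 40, 41]
print(flag_anomaly(history, 43))   # False: within normal variation
print(flag_anomaly(history, 95))   # True: alert a human analyst
```

In a real pipeline the alert wouldn't trigger a response on its own; as described above, it would only prompt someone to pull satellite imagery of the area for a closer look.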
“(The system) gets information from multiple sources and puts it all on one screen for us so that we can make more effective decisions, and then the system itself helps with that decision-making by providing recommendations and information to support that,” explained New York Air National Guard Capt. Eric Schenck in an interview recorded during the third GIDE event on July 16.
VanHerck emphasized that the system doesn’t involve new technology per se, but rather a new approach to using technology to process reams of information.
“The data exists,” VanHerck said. “What we’re doing is making that data available and shared into a cloud where machine learning and artificial intelligence look at it. And they process it really quickly and provide it to decision-makers, which I call decision superiority.”
VanHerck added that the result can be days of advance warning.
“Today, we end up in a reactive environment because we’re late with the data and information. And so all too often we end up reacting to a competitor’s move,” VanHerck said. “And in this case, it actually allows us to create deterrence, which creates stability by having awareness sooner of what they’re actually doing.”
As for concerns that this scenario might start to seem a little less precog and a little more Skynet, VanHerck made it a point to reiterate that “humans still make all the decisions.”
“We don’t have any machines making decisions,” he said. “Certainly, machines can provide options.”
That’s good news for the more paranoid among us, so long as the machines don’t begin to manipulate those options to subtly direct the human decision-makers toward serving our new robot overlords. But there I go sounding like Elon Musk again.