On Tuesday, a horrific but familiar story unfolded: a disturbed 18-year-old had traveled to Robb Elementary School in Uvalde, Texas, where he used a legally purchased assault rifle to murder 21 people: 19 children and two teachers. Before the dust had settled over the Texas border town, the conversation turned to the prevention of future shootings. Schools across Texas quickly promised increased security and new protective measures.
But how do you protect against something that often seems as pitiless and arbitrary as a bolt of lightning? For years, some have insisted that the best strategy is to adopt new security measures and invest in emerging surveillance technologies—the hope being that new products paired with hyper-vigilance will identify and stop the next shooter before he pulls the trigger.
The Uvalde Consolidated Independent School District, of which Robb is a member, followed this conventional wisdom and embraced modern security solutions at its schools. Indeed, the district had actually doubled its security budget over the past several years to invest in a variety of recommended precautions.
According to UCISD’s security page, the district employed a safety management system from security vendor Raptor Technologies, designed to monitor school visitors and screen for dangerous individuals. It also used a social media monitoring solution, Social Sentinel, that sifted through children’s online lives to scan for signs of violent or suicidal ideation. Students could download an anti-bullying app (the STOPit app) to report abusive peers, and an online portal at ucisd.net allowed parents and community members to submit reports of troubling behavior to administrators for further investigation. As has been noted, UCISD also had its own police force, developed significant ties to the local police department, and had an emergency response plan. It even deployed “Threat Assessment Teams” that were scheduled to meet regularly to “identify, evaluate, classify and address threats or potential threats to school security.”
And yet, none of the new security measures seemed to matter much when a disturbed young man brought a legally purchased weapon to Robb and committed the deadliest school shooting in the state’s history. The perpetrator wasn’t a student and therefore couldn’t be monitored by the district’s security systems.
UCISD didn’t adopt its new measures in a vacuum. The district implemented them not long after a 2018 shooting in Santa Fe, Texas that killed eight high school students and two teachers. In the wake of the massacre, Gov. Greg Abbott signed new legislation and published a 40-page list of recommendations to enhance school safety. The list, among other things, included using technology to “prevent attacks.” The governor also recommended increasing the number of police officers at schools, deepening ties between local law enforcement and school districts, and providing better mental health resources for students.
But during a press conference Wednesday, Steven McCraw, director of the Texas Department of Public Safety, admitted that security measures had failed to offer the protections they were supposed to: “Obviously, this is a situation where we failed in the sense that we didn’t prevent this mass attack,” he said.
Whether outfitting America’s schools like miniature fortresses actually helps to stop shootings is anything but clear. One thing’s for sure, though: there’s no shortage of companies out there that believe their products will make the world a safer place.
Of the many solutions that have been sold to schools as risk mitigators, social media monitoring has become one of the most common. Trawling through students’ online lives to look for signs of danger is now a routine procedure in many districts. In fact, legislators have discussed mandating such surveillance features for schools across the country. UCISD employed one such company, but Gov. Abbott said Wednesday that “there was no meaningful forewarning of this crime.” The shooter did send messages threatening the attack via Facebook Messenger half an hour before it occurred, but they were private and therefore would have been invisible to outside observers.
Facial recognition is another technology that has been offered to schools as a basic safety mechanism. The number of schools that have adopted facial recognition solutions has risen precipitously in recent years (Clearview AI announced this week that it has its sights on cracking into the market). However, despite their growing popularity, there is little evidence that these security apparatuses actually do anything to stop school shootings. Even supporters of facial recognition admit that the systems probably won’t do much once a shooter is on school property.
Covert weapons scanners are also on the rise. Such devices can be quietly installed on campuses to scan entire crowds for signs of firearms or weaponry, according to the companies that make them. These businesses have explicitly courted schools and promised that their products can identify weapons before they become active threats. Whether they’re correct—and what the privacy tradeoffs of surreptitious scans are—remains to be seen. In the case of the Uvalde shooting, it’s hard to see how a weapons scanner would’ve actually prevented anything.
While security buffs are keen on all of this technology, privacy advocates see the current trends as well-intentioned but ultimately misguided attempts to solve a much more complicated problem.
“Whether it’s facial recognition, monitoring software on school devices, cameras—all these types of surveillance have become extremely ubiquitous,” said Jason Kelley, digital strategist with the Electronic Frontier Foundation, in an interview with Gizmodo. “The companies that sell these tools are trying to do something positive—they’re trying to minimize tragedy,” he said. Yet not only can these products ultimately be ineffective, they can also end up having negative side effects on the children they’re meant to protect, Kelley offered. The intrusiveness of the tools is such that students may grow up feeling as if they have to be surveilled to be safe—even if the surveillance isn’t actually keeping them safe.
Some studies suggest that what surveillance actually provides is punishment rather than protection. The cameras and software can turn schools into little panopticons, where student behavior is constantly analyzed and assessed, and where minor infractions can be spotted and disciplined. But if the systems are good at providing internal regulation to the institutions that deploy them, the question remains: are they also good at keeping kids safe? And can an algorithm or a new scanner really see something that often feels totally invisible to the naked eye?