By Jack Morse
Of course a little-known facial recognition tool was used on Black Lives Matter protestors.
This past June, as protestors were tear-gassed in Washington, D.C.’s Lafayette Square so that Donald Trump could have a Bible-thumping photo op, officials claim a man assaulted a police officer. The man, Michael Joseph Peterson Jr., wasn’t arrested at the scene. Instead, police pulled images off Twitter and ran them through a previously secretive facial recognition system to find a match.
So reports the Washington Post, which notes that many experts believe this is the first time a defendant has been told the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS), as it is called, was used to track them down. This, despite the fact that the NCRFRILS has reportedly been used over 12,000 times.
Importantly, however, this is not the first time that facial recognition technology has been used on Black Lives Matter protestors. The New York Police Department used the tech to find and harass a man accused of shouting into an officer’s ear with a megaphone.
Use of the technology is controversial for numerous reasons. Notably, it’s led to the arrest of at least two innocent Black men (that we know about). What’s more, it has been shown to misidentify elderly people, the young, women, and people of color at higher rates than white men.
In addition to the obvious problems that might result from an error-prone surveillance technology wielded by sometimes violent police forces around the U.S., the secondary effects of facial recognition are their own disaster in the making. For example, as the coronavirus pandemic led to widespread public mask-wearing in the U.S., the Department of Homeland Security warned law enforcement around the country that criminals would use masks to avoid facial recognition.
“We assess violent extremists and other criminals who have historically maintained an interest in avoiding face recognition,” cautioned a leaked DHS note, “are likely to opportunistically seize upon public safety measures recommending the wearing of face masks to hinder the effectiveness of face recognition systems in public spaces by security partners.”
Essentially, in the eyes of DHS, facial recognition’s occasional inability to identify masked protestors is its own justification for viewing those protestors with suspicion. This matters because officials often defend their use of the tech by saying it’s only deployed when they believe a crime has been committed.
In fact, that is exactly what officials told the Washington Post when pressed about the NCRFRILS’s use on Lafayette Square protestors.
“[Fairfax County Police Major Christian Quinn] said the system is never used to gather intelligence on peaceful demonstrations,” writes the Post, “but it was employed in the Lafayette Square case because the protester had allegedly committed crimes.”
Elected officials around the country have, albeit slowly, begun to recognize the potential danger that facial recognition technology poses to First Amendment rights. Portland, San Francisco, and Boston have all banned use of the technology to varying degrees.
Perhaps the behind-the-scenes use of NCRFRILS to arrest a protestor will encourage other cities to follow suit. In the meantime, mask up.