Several of the nation’s largest cities rely on federal tax dollars to fund the development of software promising to predict future crime. So it’s been for nearly a decade. For the first time, however, the agency responsible for distributing those funds has acknowledged having little clue when it comes to the ways that money is used.
While entrusted with overseeing grants for state and local law enforcement agencies, Justice Department officials have kept no “specific records,” a senior official has said, of which agencies have tapped a leading source of DOJ funding to purchase predictive policing tools. The admission follows an April 2021 inquiry by a group of Democratic lawmakers seeking a full list of police departments spending federal grant money on technology purporting to forecast crime. The DOJ’s response, obtained exclusively by Gizmodo, not only fails to provide a full accounting of these funds but ignores many of the basic questions posed by federal lawmakers.
To what degree such tools are, or have ever been, assessed for compliance with civil rights laws remains unclear, for example. In a letter from the department signed earlier this year, demands for these specifics were largely met with ambiguity. At least one U.S. senator is now expressing outrage over the gaps in the government’s knowledge concerning its work on the taxpayer’s dime.
“If the Justice Department doesn’t have better answers than this, Congress should debate whether these programs should be allowed at all, let alone funded by taxpayers,” Sen. Ron Wyden, Democrat of Oregon, told Gizmodo. Still pressing for answers, the senator has been asking for a meeting at the DOJ since January. He’s gotten no response so far.
Wyden’s remarks come four months after Gizmodo and The Markup concluded an investigation, lasting more than a year, into a California-based predictive policing firm called PredPol. The investigation was spurred by the discovery in the summer of 2020 of more than 7 million crime predictions on an unsecured Amazon server. The predictions covered dozens of U.S. cities, impacting as many as one in 33 U.S. residents. And while limitations in available crime data hindered analysis of PredPol’s true impact on crime, the research revealed a product that, in a majority of cities, was targeting mainly Black and Latino areas. In a majority of jurisdictions for which data was available, the poorest residents of those areas were also overwhelmingly—sometimes relentlessly—targeted. More plainly, the investigation found that the fewer White residents living in an area, the more likely it was that PredPol would predict crime there. (PredPol CEO Brian MacDonald disputed the findings, claiming, without explanation, that the company’s own prediction data was “erroneous” and “incomplete.”)
Predictive policing tools—which rely on historical crime data fed to algorithms designed by, among others, companies like Oracle and IBM—are increasingly automating decisions around where police departments are focusing their patrols. Such products have been used to label not only specific neighborhoods as “hot spots” for crime, but specific people as likely suspects in crimes that, at the time, have yet to be committed.
In April 2021, Wyden and other Democrats notified U.S. Attorney General Merrick Garland that they’d grown “deeply concerned” about the unchecked expansion of predictive policing across the country. They sought information about the department’s role, and—setting a deadline approximately a month and a half out—attached a list of relevant questions. Nine months later, when a response finally did arrive, they found that most of their questions had gone unanswered.
In their letter to Garland, the lawmakers requested basic facts about the DOJ’s funding of AI-driven software. They sought to learn, for example, which state and local agencies specifically had adopted or furthered research into predictive policing tools with the DOJ’s assistance. They sought to learn whether the DOJ had any rules designed to ensure such tools are “tested for efficacy, validity, reliability, and bias.”
The letter responding to that inquiry, signed by Acting Assistant Attorney General Peter S. Hyun, begins by vaguely acknowledging that the nationwide use of predictive policing has given rise to “complex questions.” While Hyun claimed that the government remains “steadfastly committed” to safeguarding Americans’ civil rights when it comes to forecasting crime, those assurances have failed to impress Wyden, the top privacy hawk on Capitol Hill and chair of the Senate Finance Committee.
According to the response from Assistant AG Hyun, federal funding for predictive policing has principally come from two sources. One, known as the Edward Byrne Memorial Justice Assistance Grant Program (JAG)—named for a New York City police officer murdered in 1988—appears to disburse grants under conditions far less stringent than the second. JAG, the nation’s “leading source” of criminal justice funding, according to the DOJ, is not actively keeping track of whether funds are being used for this purpose.
“BJA does not have specific records,” Hyun said, “that confirm the exact number of grantees and subgrantees within the Edward Byrne Memorial Justice Assistance Grant (JAG) Formula Program that use predictive policing.”
Despite this absence of recordkeeping, at least some JAG funding has been spent in this area. In trying to address the lawmakers’ concerns, the DOJ’s Bureau of Justice Assistance (BJA) managed to identify at least five U.S. cities that have applied grants toward predictive policing programs: Bellingham, Washington; Temple, Texas; Ocala, Florida; and Alhambra and Fremont, California. In the case of Temple, Hyun wrote, JAG funding was spent on a predictive program intended to “identify targets for police intervention.”
In the five cities, the grant figures ranged from $12,805 to $774,808, with Hyun describing the latter as being applied toward a “predictive analytics software solution,” which he called “PEN Registers.” (It is not immediately clear whether this is actually the name of a real predictive policing tool; “pen register” is the name of a police device used to identify phone numbers called from a particular phone.)
Unlike JAG grants, the second source of funding identified by Hyun—the Bureau of Justice Assistance’s competitive grant program known as the Smart Policing Initiative (SPI)—reportedly holds recipients to a higher standard of accountability. Stipulations on funding are said to include, for instance, checks on whether approved projects actually achieve their intended results. SPI-funded projects, which have included predictive policing initiatives in Los Angeles, Chicago, and Baton Rouge, are further evaluated, Hyun said, by researchers tasked with gauging their impact on civil rights.
In an email, Wyden said that congressional members’ inability to obtain more information about the government’s activities warrants a further response. It may even be time, he suggested, for Congress to weigh a ban on the technology, of which civil rights groups have long been suspicious.
“It is unfortunate,” Wyden said, “the Justice Department chose not to answer the majority of my questions about federal funding for predictive policing programs.” His inquiry to the DOJ was backed by six Democratic colleagues—Senators Ed Markey of Massachusetts, Alex Padilla of California, Raphael Warnock of Georgia, and Jeff Merkley of Oregon, as well as Representatives Yvette Clarke of New York and Sheila Jackson Lee of Texas.
In approaching the DOJ, Wyden and his colleagues stated that they believed algorithms being deployed to help automate police decisions have suffered from a lack of meaningful oversight and have developed a reputation among some experts for amplifying racial biases that have long pervaded American police forces. They were concerned, they said, that at least some of the commercial products promising to predict crimes barely lived up to their names: multiple audits have found “no evidence they are effective at preventing crime,” they said.
What’s more, the lawmakers wrote that predictive algorithms may even “amount to violations of citizens’ constitutional rights to equal protection and due process under the law,” adding that such technologies have the potential to violate the presumption of innocence long held as a fundamental requirement for a fair trial in the United States.
An internal evaluation by the Los Angeles Police Department in 2019 found that police strategies relying on AI-driven tools lacked sufficient supervision and “often strayed from their stated goals.” Over the past decade, the LAPD has employed a range of predictive tools used not only to forecast where crimes will purportedly occur, but to generate names of L.A. residents who, based on gathered data, are fingered as likely to commit future crimes.
Some of these systems are modeled on police departments’ worst behavior. A 2019 study out of New York University revealed that nine police agencies had fed their software data generated “during periods when the department was found to have engaged in various forms of unlawful and biased police practices.” The same researchers noted that they’d observed “few, if any, efforts by police departments or predictive system vendors to adequately assess, mitigate, or provide assurances” about those practices.
In his letter to Wyden, Assistant AG Hyun went on to note that DOJ had previously held two symposia to discuss predictive policing, one in 2009 and another in 2010, and had funded the development of a reference guide for agencies interested in predictive policing released in 2013 by the RAND Corporation.
Both RAND and the experts who took part in the symposia foresaw the technology’s problems more than a decade ago. Symposium members noted, for instance, that American police had a “rich history” of privacy-related problems that “have yet to be resolved.” RAND, meanwhile, noted that police partnerships with private companies may allow law enforcement to skirt constitutional safeguards against the collection of private data, writing, “The Fourth Amendment provides little to no protection for data that are stored by third parties.” Very few departments using predictive tools, RAND said, had actually evaluated the “effectiveness of the predictions” or the “interventions developed in response to their predictions.”
Despite Hyun’s acknowledgment that the DOJ had funded predictive tools used to cast suspicion on specific individuals, the guide cited in his letter appears to warn against the practice, stating that “fewer problems” would arise from location-based targeting.