Why Tinder’s background check is a major backfire

By Albert Fox Cahn and Sarah Roth

It’s a pattern we’ve seen too many times: Real-world violence migrates online, and after protracted denial, the platforms that profit from abuse promise to save us. Their “solutions” are convenient (and conveniently profitable) but only make matters worse. The latest example in the news is particularly wrenching: Tinder’s flailing effort to address the all-too-real threat of intimate partner violence with unproven, error-prone background checks. Not only will the company’s new surveillance software fail to keep users safe, it will put even more of us at risk.

It’s hard to capture every way that Tinder’s plan is poised to fail. First, background checks don’t work, at least not at telling you who is likely to abuse. Background checks are great at telling you about drug use and financial difficulties when users are Black, Brown, or from other over-policed communities. Want to know if someone has a history of marijuana use? Tinder is here to help. But if you want to know whether someone is likely to commit an act of intimate partner violence, suddenly the data doesn’t look so good. That’s because the vast majority of abusers are never charged for their violence, and when they are, the charges fit the same pattern of discrimination that defines every dimension of American policing. Not only are white abusers more likely to go free, but BIPOC survivors are often arrested alongside their abusers. So if you are a wealthy white abuser, statistically, Tinder will give you a green light nearly every time.

Even worse, using these faulty checks re-victimizes survivors of intimate partner violence, placing the burden of preventing attacks on them. Soon, users who don’t run the free trial check, or who exhaust the trial and are unable to pay, could be blamed for failing to predict their own attack. This will become the latest justification for police, university officials, and others in positions of power to ignore survivors, silence their complaints, and deny them support. AI may be ineffective at preventing crime, but it is very effective at preserving the status quo.

Background checks ignore the lived experience of survivors, buying into the outdated narrative that the cycle of abuse could somehow have been stopped if only people had known their partner had a criminal history from the start. Just as damning, the system relies on the broken logic of broken-windows policing, and the belief that someone convicted of a crime should be completely exiled from society, their whole life discarded.

If Tinder truly wanted to protect its users, it wouldn’t invest in this new, misguided form of user surveillance; it would stop enabling surveillance of its users. And if it wanted to help those targeted by abusers both on and off its platform, it would invest in the countless community-based groups that provide survivors what they need most: low-tech resources like a safe place to sleep when escaping their abuser.

Yes, technology has made this problem worse, putting people in harm’s way, but the solution isn’t more unproven, discriminatory technology. The solution is to listen to survivors. Real protection prioritizes their expressed needs: financial independence, data security, and the groups that fight for and with survivors of intimate partner violence every day.

Albert Fox Cahn is the founder and executive director of the Surveillance Technology Oversight Project (S.T.O.P.), a New York-based civil rights and privacy group, and a visiting fellow at Yale Law School’s Information Society Project. Sarah Roth is an advocacy and communications intern at S.T.O.P., a recent graduate of Vassar College, and a prospective JD candidate.
