Apple Says It Won’t Let the Government Turn Its Child Abuse Detection Tools Into a Surveillance Weapon

Photo: GIUSEPPE CACACE / AFP (Getty Images)

After facing a whole lot of criticism, Apple has doubled down and defended its plans to launch controversial new tools aimed at identifying and reporting child sex abuse material (or CSAM) on its platforms.

Last week, the company announced several pending updates, outlining them in a blog post entitled “Expanded Protections for Children.” These new features, which will roll out later this year with the release of iOS 15 and iPadOS 15, are designed to use algorithmic scanning to search for and identify child abuse material on user devices. One tool will scan on-device photos that are being uploaded to iCloud Photos for signs of CSAM, while the other will scan iMessages sent to and from child accounts in an effort to stop minors from sharing or receiving messages that include sexually explicit images. We did a more detailed rundown on both features and the concerns about them here.

The company barely had time to announce its plans last week before it was met with a vociferous outcry from civil liberties organizations, which have characterized the proposed changes as well intentioned but ultimately a slippery slope toward a dangerous erosion of personal privacy.

On Monday, Apple published a response to many of the concerns that have been raised. The company specifically denied that its scanning tools might someday be repurposed to hunt for material other than CSAM on users’ phones and computers. Critics have worried that a government (ours or someone else’s) could pressure Apple to add or change the new features—to make them, for instance, a broader tool of law enforcement.

However, in a rare instance of a corporation making a firm promise not to do something, Apple said definitively that it would not be expanding the reach of its scanning capabilities. According to the company:

Apple will refuse any such demands [from a government]. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future.


During a follow-up Q&A session with reporters on Monday, Apple further clarified that the features are launching only in the U.S. for now. While some concerns have been raised about whether a foreign government could corrupt or subvert these new tools to employ them as a form of surveillance, Apple said Monday that it would carefully conduct legal evaluations on a country-by-country basis before releasing the tools abroad, to ensure there is no chance of abuse.

Understandably, this whole thing has confused a lot of people, and there are still questions swirling as to how these features will actually work and what that means for your privacy and device autonomy. Here are a few points Apple has recently clarified:

  • Weirdly, iCloud Photos has to be activated for the CSAM detection feature to actually work. There has been some confusion about this point, but essentially Apple is only searching through content that is shared with its cloud system. Critics have pointed out that this would seem to make it exceedingly easy for abusers to elude the informal dragnet that Apple has set up, as all they would have to do to hide CSAM content on their phone would be to opt out of iCloud. Apple said Monday it still believes the system will be effective.
  • Apple is not loading a database of child porn onto your phone. Another point the company was forced to clarify on Monday is that it will not, in fact, be downloading actual CSAM onto your device. Instead, it is using a database of “hashes”: digital fingerprints of specific, known child abuse images, represented as numerical code. That database will be loaded into the phone’s operating system, allowing images uploaded to the cloud to be automatically compared against the hashes it contains. If an image’s hash doesn’t match one in the database, Apple doesn’t care about it. (For a rough sense of how this kind of hash lookup works, see the sketch after this list.)
  • iCloud won’t just be scanning new photos—it plans to scan all of the photos currently in its cloud system. In addition to photos uploaded to iCloud in the future, Apple also plans to scan everything already stored on its cloud servers, a point the company reiterated during Monday’s call with reporters.
  • Apple claims the iMessage update does not share any information with Apple or with law enforcement. According to Apple, the updated feature for iMessage does not share any of your personal information with the company, nor does it alert law enforcement. Instead, it merely alerts a parent if their child has sent or received a texted image that Apple’s algorithm has deemed sexual in nature. “Apple never gains access to communications as a result of this feature in Messages. This feature does not share any information with Apple, NCMEC or law enforcement,” the company said. The feature is only available for accounts that have been set up as families in iCloud, the company says.
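To make the hash-matching idea a bit more concrete, here is a minimal, purely illustrative sketch of what “compute an image’s fingerprint and check it against a list of known fingerprints” looks like in code. This is not Apple’s implementation: the names (knownHashes, fingerprint, matchesKnownCSAM) are made up for the example, and SHA-256 stands in for Apple’s perceptual NeuralHash, which is designed so that visually similar images produce the same fingerprint.

```swift
import Foundation
import CryptoKit

// Illustrative sketch only. Apple's real pipeline uses a perceptual hash
// ("NeuralHash") plus cryptographic blinding and private set intersection,
// none of which is public API. SHA-256 is used here just to show the shape
// of "hash the image, check the hash against a list of known hashes."

// Hypothetical database of known-image fingerprints (hex strings),
// standing in for the hashes supplied by NCMEC and other child safety groups.
let knownHashes: Set<String> = [
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
]

// Compute a fingerprint for an image's raw bytes.
func fingerprint(of imageData: Data) -> String {
    let digest = SHA256.hash(data: imageData)
    return digest.map { String(format: "%02x", $0) }.joined()
}

// Decide whether a photo queued for upload matches a known fingerprint.
func matchesKnownCSAM(_ imageData: Data) -> Bool {
    knownHashes.contains(fingerprint(of: imageData))
}

// Example: a photo whose hash isn't in the database simply isn't flagged.
let somePhoto = Data("not a real image, just demo bytes".utf8)
print(matchesKnownCSAM(somePhoto))   // false
```

The real system is considerably more involved than this plaintext lookup: the on-device database is cryptographically blinded, matching happens through a private set intersection protocol so the device never learns the result of any individual comparison, and Apple says nothing gets flagged for human review until an account crosses a threshold number of matches.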

Despite assurances, privacy advocates and security experts are still not super impressed—and some are more than a little alarmed. In particular, on Monday, well-known security expert Matthew Green posited a hypothetical scenario in which the system could be abused, one contentious enough to inspire a minor Twitter argument between Edward Snowden and ex-Facebook security head Alex Stamos in the replies.


So, suffice it to say, a lot of people still have questions. We’re all in pretty unknown, messy territory here. While it’s hard to knock the intent behind Apple’s mission, the power of the technology it’s deploying has caused alarm, to say the least.


Lucas Ropek