Kyle Rittenhouse’s Lawyers Claim Zooming-In on an iPad Fundamentally Alters a Digital Image

Mark Richards, one of Rittenhouse’s lawyers pictured above, argued against zooming in on a video, claiming Apple’s AI creates “what it thinks is there, not what necessarily is there.”
Photo: Pool (Getty Images)

Are digital images a manufactured construct? Does the act of zooming fundamentally alter a file’s essence? Those are some of the unexpected, and at times inelegant, questions posed this week by the lawyers of 18-year-old Kyle Rittenhouse, who is on trial for shooting and killing two people and injuring another at a protest in Kenosha, Wisconsin last year.

During the trial, in an exchange first reported by The Verge, one of Rittenhouse’s lawyers, Mark Richards, objected when the prosecution attempted to use an iPad’s pinch-to-zoom feature while showing a video depicting Rittenhouse shooting one of the victims. Richards claimed Apple’s use of “artificial intelligence” in its zooming process would distort the original version by “creating what it thinks is there, not what necessarily is there.”

“iPads, which are made by Apple, have artificial intelligence in them that allow things to be viewed through three dimensions and logarithms,” Richards said. “It uses artificial intelligence, or their logarithms, to create what they believe is happening.” (By “logarithm” here I’m assuming Richards meant algorithm, but we’ll skip past that for now.)

To back up for a moment, Apple first brought pinch-to-zoom to its phones in 2007, before finally applying the feature to videos in 2015. In general, enlarging a digital photo usually involves image interpolation for resolution enhancement. It’s difficult to see how this fundamentally alters an image the way the defense argues—zooming in on a raster image should just enlarge the existing pixels. As for the claim that “AI” is used in the pinch-to-zoom process, Gizmodo reached out to Apple for more clarity but hasn’t heard back.
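Apple hasn’t publicly documented exactly how its pinch-to-zoom scales video, so this is purely illustrative, but a nearest-neighbor enlargement—the simplest form of digital zoom—shows why enlarging a raster image doesn’t invent content: every output pixel is copied directly from an existing input pixel.

```python
# Nearest-neighbor upscaling: the simplest digital zoom.
# Every output pixel is a copy of an existing input pixel,
# so no new values are "created." (Hypothetical sketch; not
# a description of Apple's actual scaling pipeline.)

def nearest_neighbor_zoom(image, factor):
    """Enlarge a 2D grid of pixel values by an integer factor."""
    out = []
    for row in image:
        # Repeat each pixel horizontally...
        scaled_row = [px for px in row for _ in range(factor)]
        # ...then repeat the whole row vertically.
        out.extend(list(scaled_row) for _ in range(factor))
    return out

tiny = [[10, 20],
        [30, 40]]

zoomed = nearest_neighbor_zoom(tiny, 2)
# Every value in `zoomed` already existed in `tiny`.
assert {v for row in zoomed for v in row} == {10, 20, 30, 40}
```

Smoother methods like bilinear interpolation blend neighboring pixels to fill in the enlarged grid, but even those new values are averages of pixels already in the frame, not guesses about what the scene “should” contain.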

The prosecution, meanwhile, responded by noting that zooming in on images and videos is a common practice and something jurors would intuitively understand, and said the practice didn’t damage the “integrity” of the image, notes The New York Times.


But if Rittenhouse’s lawyer’s argument sounds like a stretch to you, you’d be at odds with Judge Bruce Schroeder, who accepted the argument as valid after asking if the image was in “its virginal state,” notes the Times.

Regardless of the truth, the judge said it was up to the prosecution to prove the image wasn’t manipulated and only gave them around 20 minutes to find an expert. Unable to find someone qualified in such a short period of time, the prosecution ended up ditching the iPad altogether, and instead had the jury squint to watch the non-zoomed image on what appeared to be a Windows PC connected to a monitor.

Though the question of whether or not zoomed-in images can be used in court may appear, on the face of it, like a mighty leap, the Rittenhouse case could potentially offer a sneak preview of the thorny, verbose legal arguments to come if deepfakes continue to proliferate. Some deepfake videos are already impressively good and are only expected to improve—casting suspicion on all digital images and increasing the need for forensic analysis.

Though some states, including California, Virginia, and Texas, have criminalized the modification of images using machine learning algorithms in the contexts of revenge porn and politics, legal precedents surrounding the general concept are still relatively nascent.

Though hard figures around deepfake videos are difficult to determine, research conducted by cybersecurity company Deeptrace estimates there were 14,698 deepfaked videos online in 2019, up from 7,964 the year prior. Whatever the actual figure is, it will likely swell in coming years as the technology becomes even more readily available to casual users through apps. If detection methods or some standardized way of verifying the originality of an image or video aren’t widely agreed upon, it’s not hard to imagine another argument like Richards’ being applied to deepfakes in the not-too-distant future.

Read More

Mack DeGeurin