Apple's Not Digging Itself Out of This One

The ongoing drama surrounding the company's proposed new child abuse prevention tools has taken another turn.

Photo: Johannes Simon (Getty Images)

Online researchers say they have found flaws in Apple’s new child abuse detection tool that could allow bad actors to target iOS users. However, Apple has denied these claims, arguing that it has intentionally built safeguards against such exploitation.

It’s just the latest bump in the road for the rollout of the company’s new features, which have been roundly criticized by privacy and civil liberties advocates since they were initially announced two weeks ago. Many critics view the updates—which are built to scour iPhones and other iOS products for signs of child sexual abuse material (CSAM)—as a slippery slope towards broader surveillance.

The most recent criticism centers on allegations that Apple’s “NeuralHash” technology, which scans for the offending images, can be exploited and tricked into potentially targeting users. This started after online researchers dug up and shared code for NeuralHash in an effort to better understand it. One GitHub user, AsuharietYgvar, claims to have reverse-engineered the scanning tech’s algorithm and published the code to his GitHub page. Ygvar wrote in a Reddit post that the algorithm was already present in iOS 14.3 as obfuscated code, and that he had extracted it and rebuilt it as a Python script to assemble a clearer picture of how it works.
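
Based on Ygvar’s description, NeuralHash behaves like a perceptual hash: a neural network reduces a photo to a feature vector, and that vector is then collapsed into a short bit string that stays stable under small edits like resizing or recompression. The Python sketch below is a simplified, hypothetical illustration of that general structure only; it is not Ygvar’s script or Apple’s model, and the embedding function and projection matrix are made-up stand-ins for the trained network and hashing step that actually ship with iOS.

```python
import numpy as np

# Hypothetical stand-ins: Apple's pipeline uses a trained convolutional network
# and a fixed hashing step shipped with iOS, not the random values used here.
EMBEDDING_DIM = 128   # length of the feature vector the network produces
HASH_BITS = 96        # NeuralHash reportedly outputs a 96-bit hash

rng = np.random.default_rng(seed=0)
hyperplanes = rng.normal(size=(HASH_BITS, EMBEDDING_DIM))  # locality-sensitive projection

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for the neural network that maps an image to a feature
    vector; here we just pool pixel blocks, which is NOT what Apple does."""
    pooled = image.reshape(EMBEDDING_DIM, -1).mean(axis=1)
    return pooled / (np.linalg.norm(pooled) + 1e-9)

def neural_hash(image: np.ndarray) -> str:
    """Project the embedding onto fixed hyperplanes and keep only the signs,
    producing a short fingerprint that tolerates small changes to the image."""
    bits = (hyperplanes @ embed(image)) > 0
    value = int("".join("1" if b else "0" for b in bits), 2)
    return f"{value:024x}"  # 96 bits -> 24 hex characters

if __name__ == "__main__":
    photo = rng.random(256 * 256)                      # fake grayscale image
    edited = photo + rng.normal(0, 1e-4, photo.shape)  # tiny perturbation
    print(neural_hash(photo), neural_hash(edited))     # usually identical or nearly so
```

The collision researchers demonstrated amounts to finding a second, unrelated image that lands on the same output of a function like this one.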

Problematically, within a couple of hours, another researcher said they were able to use the posted code to trick the system into misidentifying an image, creating what is called a “hash collision.”

Apple’s new system is automated to search for unique digital signatures of specific, known photos of child abuse material—called “hashes.” A database of CSAM hashes, compiled by the National Center for Missing and Exploited Children, will actually be encoded into future iPhone operating systems so that phones can be scanned for such material. Any photo that a user attempts to upload to iCloud will be scanned against this database to ensure that such images are not being stored in Apple’s cloud repositories.

A “hash collision,” however, is a situation in which two totally different images produce the same hash, or signature. In the context of Apple’s new tools, critics claim, a collision could create a false positive, implicating an innocent person for having child porn. That false positive could be accidental, or it could be intentionally triggered by a malicious actor.
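
Stripped of Apple’s cryptographic protections, the on-device check described above boils down to a set-membership test: compute each photo’s hash and see whether it appears in the known-CSAM hash list. The toy Python sketch below, which uses a deliberately weak, made-up hash so a collision is easy to construct, illustrates why a collision registers as a match even though the colliding image is innocuous.

```python
import numpy as np

def toy_hash(image: np.ndarray) -> int:
    """Stand-in for a perceptual hash: reduces an image to a short fingerprint.
    Deliberately weak so a collision is trivial to build; NeuralHash is far
    more complex, but the lookup it feeds into is conceptually the same."""
    return int(image.sum() * 1000) % (2**32)

# Pretend this is the (normally encrypted) database of known-CSAM fingerprints.
known_bad_hashes = {toy_hash(np.full((8, 8), 0.5))}

def scan_before_upload(photo: np.ndarray) -> bool:
    """What the on-device check boils down to: does this photo's
    fingerprint appear in the known-bad set?"""
    return toy_hash(photo) in known_bad_hashes

innocent_photo = np.zeros((8, 8))
innocent_photo[0, 0] = 32.0   # crafted so its fingerprint collides with a "bad" one

print(scan_before_upload(np.ones((8, 8))))   # False: no collision
print(scan_before_upload(innocent_photo))    # True: a false positive
```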

Cybersecurity professionals wasted no time sharing their opinions about this development on Twitter.

Apple, however, has made the argument that it has set up multiple fail-safes to stop this situation from ever really happening.

For one thing, the CSAM hash database encoded into future iPhone operating systems is encrypted, Apple says. Because an attacker can’t read the target hashes off a device, they would have very little chance of discovering and replicating signatures that match images in the database, unless they were themselves in possession of actual child porn, which is a federal crime.

Apple also argues that its system is specifically set up to identify collections of child pornography: it only triggers once roughly 30 matching hashes have been identified for a single account. That threshold, the company has argued, makes a random false positive highly unlikely.
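
Ignoring the cryptographic machinery Apple layers on top (what it calls “safety vouchers,” which keep individual matches unreadable below the threshold), the gating the company describes amounts to counting matches per account and escalating only after roughly 30 of them accumulate. Here is a minimal, hypothetical sketch of that logic:

```python
# A simplified model of the threshold logic Apple describes: individual
# matches accumulate silently, and nothing is escalated to human review
# until roughly 30 of them have piled up for a single account.
MATCH_THRESHOLD = 30

class AccountMatchCounter:
    def __init__(self) -> None:
        self.matches = 0

    def record_match(self) -> None:
        """Called whenever a photo's hash matches the known-CSAM database."""
        self.matches += 1

    def ready_for_review(self) -> bool:
        """Only once the threshold is crossed would flagged content ever be
        surfaced to a human reviewer."""
        return self.matches >= MATCH_THRESHOLD

counter = AccountMatchCounter()
for _ in range(5):            # a few stray collisions, accidental or malicious
    counter.record_match()
print(counter.ready_for_review())   # False: well below the threshold

for _ in range(25):           # a collection-sized number of matches
    counter.record_match()
print(counter.ready_for_review())   # True: now, and only now, is review triggered
```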

Finally, if those other mechanisms somehow fail, a human reviewer is tasked with looking over any flagged cases of CSAM before the case is sent on to NCMEC (which would then tip off police). In that situation, a false positive could be weeded out manually before law enforcement ever gets involved.

In short, Apple and its defenders argue that a scenario in which a user is accidentally flagged or “framed” for having CSAM is somewhat hard to imagine.

Jonathan Mayer, an assistant professor of computer science and public affairs at Princeton University, told Gizmodo that the fears surrounding a false-positive problem may be somewhat overblown, though there are much broader, legitimate concerns about Apple’s new system. Mayer would know: he helped design the system that Apple’s CSAM-detection tech is based on.

Mayer was part of a team that recently conducted research into how algorithmic scanning could be deployed to search for harmful content on devices while maintaining end-to-end encryption. According to Mayer, the system they built had obvious shortcomings. Most alarmingly, the researchers noted that it could easily be co-opted by a government or other powerful entity, which might repurpose its surveillance tech to look for other kinds of content. “Our system could easily be repurposed for surveillance and censorship,” write Mayer and his research partner, Anunay Kulshrestha, in an op-ed in the Washington Post. “The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.”

The researchers were “so disturbed” by their findings that they subsequently declared the system dangerous, and warned that it shouldn’t be adopted by a company or organization until more research could be done to curtail the potential dangers it presented. However, not long afterward, Apple announced its plans to roll out a nearly identical system to over 1.5 billion devices in an effort to scan for CSAM. The op-ed ultimately notes that Apple is “gambling with security, privacy and free speech worldwide” by implementing a similar system in such a hasty, slapdash way.

Matthew Green, a well-known cybersecurity professional, has similar concerns. In a call with Gizmodo, Green said that not only is there an opportunity for this tool to be exploited by a bad actor, but that Apple’s decision to launch such an invasive technology so swiftly and unthinkingly is a major liability for consumers. The fact that Apple says it has built safety nets around this feature is not comforting at all, he added.

“You can always build safety nets underneath a broken system,” said Green, noting that it doesn’t ultimately fix the problem. “I have a lot of issues with this [new system]. I don’t think it’s something that we should be jumping into—this idea that local files on your device will be scanned.” Green further affirmed the idea that Apple had rushed this experimental system into production, comparing it to an untested airplane whose engines are held together via duct tape. “It’s like Apple has decided we’re all going to go on this airplane and we’re going to fly. Don’t worry [they say], the airplane has parachutes,” he said.

A lot of other people share Green and Mayer’s concerns. This week, some 90 different policy groups signed a petition, urging Apple to abandon its plan for the new features. “Once this capability is built into Apple products, the company and its competitors will face enormous pressure — and potentially legal requirements — from governments around the world to scan photos not just for CSAM, but also for other images a government finds objectionable,” the letter notes. “We urge Apple to abandon those changes and to reaffirm the company’s commitment to protecting its users with end-to-end encryption.”
