Google Flagged Parents' Photos of Sick Children as Sexual Abuse

In at least two cases, Google has shut down parents' accounts over photos of their children that contained nudity and that pediatricians had requested in order to diagnose an illness.

Google uses Microsoft’s PhotoDNA screening algorithm to look for potential child sexual abuse violations. Occasional false positives are an inevitability with the tool.
Photo: VDB Photos (Shutterstock)

Two fathers, one in San Francisco and another in Houston, were separately investigated by the police on suspicion of child abuse and exploitation after using Android phones (Android is Google's mobile operating system) to take photos of their sons' genitals for medical purposes. Though in both cases the police determined that the parents had committed no crime, Google didn't come to the same conclusion: it permanently deactivated their accounts across all of its platforms, according to a report from The New York Times.

The incidents highlight what can go wrong with automatic photo screening and reporting technology, and the thorny territory tech companies wade into when they begin relying on it. Without context, discerning an innocent image from abuse can be near-impossible—even with the involvement of human screeners.


Google, like many companies and online platforms, uses Microsoft's PhotoDNA, an algorithmic screening tool meant to accurately suss out photos of abuse. According to Google's self-reported data, the company identified 287,368 instances of suspected abuse in the first six months of 2021 alone. According to Google, those incident reports come from multiple sources, not limited to the automated PhotoDNA tool. “Across Google, our teams work around-the-clock to identify, remove, and report this content, using a combination of industry-leading automated detection tools and specially-trained reviewers. We also receive reports from third parties and our users, which complement our ongoing work,” a statement on Google’s website reads.
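
PhotoDNA's internals aren't public, but the general technique the article refers to, perceptual hashing matched against a database of known images, can be sketched in a few lines. The example below is a minimal illustration using a simple "average hash" and a placeholder KNOWN_HASHES set; it is a toy built on assumptions, not Google's or Microsoft's actual implementation.

```python
# A minimal, illustrative sketch of hash-based image matching -- NOT PhotoDNA,
# which is proprietary. It shows the general idea: reduce an image to a compact
# fingerprint, then compare that fingerprint against a database of known hashes
# using Hamming distance, tolerating small differences from resizing/recompression.

from PIL import Image  # pip install Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Shrink and grayscale the image, then set one bit per pixel
    depending on whether that pixel is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical database of hashes of previously verified images
# (placeholder value, for illustration only).
KNOWN_HASHES = {0x8F3C0FF0183C7E00}


def is_probable_match(path: str, threshold: int = 5) -> bool:
    """Flag the image if its hash is within `threshold` bits of a known hash.
    Minor distortions change only a few bits, so a tolerance is used
    instead of exact equality -- which is also why false positives happen."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold for known in KNOWN_HASHES)
```

One limitation worth noting: matching of this kind only recognizes images already catalogued in a database. Newly taken photos, like the ones in the cases described here, would presumably have to be flagged by machine-learning classifiers instead, which is where judging context becomes much harder.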


Some privacy advocates, like the libertarian Electronic Frontier Foundation, have vocally opposed the expansion of such screening technologies. Yet child sexual abuse and exploitation is (rightfully) a particularly difficult topic around which to advocate privacy above all else.


What seems clear is that no automated screening system is perfect, false reports and detections of abuse are inevitable, and companies likely need a better mechanism for dealing with them.

What happened?

According to the Times, in the San Francisco case, Mark (whose last name was withheld) took photos of his toddler's groin to document swelling after noticing his son was experiencing pain in the region. His wife then scheduled an emergency video consultation with a doctor for the next morning. It was February 2021 and, at that stage of the pandemic, going to a medical office in person, unless absolutely necessary, was generally inadvisable.


The scheduling nurse requested photos be sent over ahead of time, so the doctor could review them in advance. Mark’s wife texted the photos from her husband’s phone to herself, and then uploaded them from her device to the medical provider’s message system. The doctor prescribed antibiotics, and the toddler’s condition cleared up.

However, two days after initially taking the photos of his son, Mark received a notification that his account had been disabled for “harmful content” that was in “severe violation of Google’s policies and might be illegal,” reported the Times. He appealed the decision, but received a rejection.


Simultaneously, though Mark didn't know it yet, Google also reported the photos to the National Center for Missing & Exploited Children's CyberTipline, which escalated the report to law enforcement. Ten months later, Mark received notice from the San Francisco Police Department that it had investigated him, based on the photos and a report from Google. The police had issued search warrants to Google requesting everything in Mark's account, including messages, photos and videos stored with the company, internet searches, and location data.

The investigators concluded that no crime had occurred, and the case was closed by the time Mark found out it had been opened. He tried to use the police report to appeal to Google again and get his account back, but his request was once more denied.


What were the consequences?

Though it may seem like a small inconvenience compared with being suspected of child abuse, the loss of Mark's Google account was reportedly a major hassle. From the Times:

Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life, his Google Fi account shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.

“The more eggs you have in one basket, the more likely the basket is to break,” he said.


In the very similar Houston, Texas, case reported on by the Times, another father was asked by a pediatrician to take photos of his son's "intimal parts" to diagnose an infection. Those images were automatically backed up to Google Photos (note: automated cloud storage isn't always a good idea) and sent from the father to his wife via Google's messaging service. The couple was in the middle of purchasing a new home at the time, and because the pictures ultimately led to the dad's email address being disabled, they faced added complications.

In an emailed statement, a Google spokesperson told Gizmodo the following:

Child sexual abuse material (CSAM) is abhorrent and we’re committed to preventing the spread of it on our platforms. We follow US law in defining what constitutes CSAM and use a combination of hash matching technology and artificial intelligence to identify it and remove it from our platforms. Additionally, our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we’re able to identify instances where users may be seeking medical advice. Users have the ability to appeal any decision, our team reviews each appeal and we will reinstate an account if an error has been made.


Errors do seem to have been made in these two cases, and Google obviously did not reinstate the accounts in question. The company did not immediately respond to Gizmodo's follow-up questions. And the repercussions could easily have been worse than just deleted accounts.

It’s difficult to “account for things that are invisible in a photo, like the behavior of the people sharing an image or the intentions of the person taking it,” Kate Klonick, a lawyer and law professor who focuses on privacy at St. John’s University, told the NYT. “This would be problematic if it were just a case of content moderation and censorship,” Klonick added. “But this is doubly dangerous in that it also results in someone being reported to law enforcement.”


And some companies do seem to be well aware of the complexity and possible danger automatic screening tools pose. Apple announced plans for its own CSAM screening system back in 2021. However, after backlash from security experts, the company delayed its plans before seemingly scrapping them entirely.
