Google falsely told the police that a father was molesting his son; Pluralistic takes up the case
Mark’s toddler had a painful, swollen penis. His wife contacted their doctor, whose nurse asked Mark to send a picture of the toddler’s penis, because the pandemic was raging and the doctor wasn’t seeing patients in person. Mark’s phone synched the photo to his Google Photos account, and Google’s scanning tools automatically detected the picture of a child’s penis and turned Mark in to the SFPD, accusing him of molesting his son.
Mark and his wife took several pictures of their son's penis, including one that contained Mark's hand. The child had a bacterial infection, which was quickly alleviated with antibiotics that the doctor prescribed via telemedicine.
Google refused to listen to Mark's explanation. Instead, they terminated his account, seizing more than a decade's worth of personal and business email, cloud files, and calendar entries. He lost all the family photos he'd synched with Google Photos (including every photo of his toddler, from birth onward). He even lost his mobile plan, because he's a Google Fi user. He lost access to Google Authenticator and couldn't sign into any of his other online accounts to tell them that he had a new, non-Gmail email address.
Mark received an envelope from the SFPD telling him that Google had contacted the police department, accusing him of producing child sexual abuse material (CSAM), and that the company had secretly given the police full access to all of his files and data, including his location and search history, as well as all his photos and videos.
The reason the police had to mail him all this stuff? Google had shut down his phone number and so they couldn't reach him.
To SFPD's credit, they figured out what was going on and decided Mark wasn't a child molester. To Google's shame, they continue to hold all his data hostage – including his address book, with the contact info for everyone he is personally or professionally connected to.
Google says they won't give Mark his account back because they found another "problematic" image in his files: "a young child lying in bed with an unclothed woman." Mark doesn't know which picture they mean (he no longer has access to any of his photos), but he thinks it was probably an intimate photo he captured of his son and wife together in bed one morning ("If only we slept with pajamas on, this all could have been avoided.").
Writing for the New York Times, Kashmir Hill discusses another, similar case, involving a Houston dad called Cassio, whose doctor asked him to send in photos of his child's genitals for diagnostic purposes. Like Mark, Cassio was cleared by police, and, like Mark, Cassio is locked out of his Gmail account, along with all the services associated with it.
Hill spoke with my EFF colleague Jon Callas, who criticized Google, saying that private family photos should be a "private sphere" and not subject to routine scanning by algorithms or review by moderators. Google claims that they only scan your photos when you take an "affirmative action" related to them, but this includes automatically uploading your photos to Google Photos, which is the default behavior on Android devices.
Also cited in the article is Kate Klonick, a cyberlaw prof and expert on content moderation. Klonick pointed out that this was "doubly dangerous in that it also results in someone being reported to law enforcement," suggesting that this could have resulted in a loss of custody if the police had been a little less measured.
Klonick criticized Google for the lack of a "robust process" for handling this kind of automated filter error. Hill describes the "AI" tools Google uses to automatically flag potential CSAM. As is so often the case with automated filtering tools, the flagging takes place in a nanosecond, while the process for questioning its judgment takes months or years, or forever.
Last summer, I called Google and its Big Tech competitors "utilities governed like empires." The companies deliberately pursued a strategy of becoming indispensable to us, declaring mission statements like "organize all the world's information" and backing them up with vertical stacks of products designed to capture your whole digital life.
That is, the tech giants set out to become utilities, as important to your life as your electricity and water – and they succeeded. However, they continue to behave as though they are simply another business, whose commercial imperatives – including the arbitrary cancellation of your services without appeal – are private matters.
Some people say this means we should just turn these companies into actual utilities, but I think that's the wrong impulse. The problem with (say) Facebook isn't merely that Zuck is monumentally unqualified to be the unaccountable, self-appointed dictator of three billion people's digital lives. The problem is that no one should have that job. We should abolish that job.
Which is why I'm so interested in interoperability – including a mix of state-imposed interop obligations and protecting interoperators' self-help measures like reverse-engineering, scraping and bots.
That is a path to pluralizing power over the necessities of our lives – use the power of the state to set limits on the conduct of online platforms (say, by passing strong privacy laws with a private right of action), which makes sure that no matter which choice a user makes, they won't be exploited by online companies. Then use the power of the state to safeguard interoperability, so that users who don't like the way an online host uses its discretion can easily leave, without surrendering their data or their social connections:
Rather than entrusting the US government – including its policing and espionage arms – to run our digital lives, and the digital lives of non-Americans around the world whom the US government explicitly disclaims any duty to, we can ask the government to do a much narrower job. We can ask them to prevent companies from harming us, and we can ask them to force companies not to take our data and social connections hostage. That way, we don't have to ask the government – which might be run by e.g. Ron DeSantis in a couple years – to decide which conversations are lawful to have:
Instead, we can create our own, community-run and community-managed online spaces and services.
Correction: An earlier draft of this story misstated a technical detail; Mark didn’t email his photo to his doctor; rather, he took the photo with his phone and the image was automatically synched to his Google Photos account, triggering a scan.