The whistleblower didn't go to a newspaper. They went to a Belgian TV broadcaster.
In July 2019, a contractor working for Google leaked more than 1,000 Google Assistant recordings to VRT NWS, a Flemish public broadcaster. The recordings were meant to be private conversations captured by Google Home devices. Many of them were not captured intentionally at all - 153 of the recordings VRT reviewed appeared to have been triggered by background noise or sounds that resembled the "Hey Google" wake phrase.
What the whistleblower wanted people to understand: the conversation you have near a Google Home device may be reviewed by a contractor in another country, regardless of whether you said "Hey Google."
What did the leaked recordings contain?
The leaked Google Assistant recordings included private home conversations, bedroom discussions, and personally identifiable details captured without users' awareness. Many were triggered without wake-word activation. Contractors could identify users' addresses and personal details from context despite nominal anonymization.
VRT journalists reviewed the leaked recordings and found conversations from couples arguing in their homes, children talking, people discussing personal medical matters, and ambient household audio captured from rooms where users had no idea their device had activated. The whistleblower told VRT that contractors regularly heard addresses mentioned in conversation, could identify users' neighborhoods from audio cues, and described the anonymization - replacing names with numbers - as inadequate given how much identifying context the recordings contained.
Google confirmed the practice in a statement, calling it a "language expert review" that covered "about 0.2% of all audio clips." At Google's scale, 0.2% represents an enormous number of recordings.
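To see why "about 0.2%" is not a small number at Google's scale, a back-of-envelope estimate helps. The monthly clip volume below is a hypothetical assumption for illustration - Google has not published Assistant query volumes for this period - but the review rate is Google's own stated figure:

```python
# Back-of-envelope estimate. The clip volume is an assumed, illustrative
# figure; only the 0.2% review rate comes from Google's statement.
assumed_monthly_clips = 1_000_000_000  # hypothetical: 1B Assistant audio clips/month
review_rate = 0.002                    # Google's stated "about 0.2%"

reviewed_per_month = assumed_monthly_clips * review_rate
print(f"{reviewed_per_month:,.0f} clips reviewed per month")
```

Under that assumption, "0.2%" still means millions of private recordings passing through human reviewers every month.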
What did Google's terms say?
Google's Terms of Service for Google Home at the time did not disclose that human contractors would review recordings. The privacy policy referenced data collection and improvement of services in general terms. The specific practice of human contractor review was not surfaced to users in any accessible way.
Google also characterized the leak itself as a breach of its policies - the whistleblower had violated an NDA by sharing the recordings. Google's framing positioned the problem as the leak rather than the practice that was leaked. The practice itself, the company maintained, was legitimate and disclosed.
The accidental activation problem
The detail that made the Google incident distinctly alarming was the accidental activation rate. Amazon's and Apple's contractor programs at least nominally operated on recordings that users had intentionally initiated by speaking a wake phrase. The VRT investigation showed that 153 of the roughly 1,000 recordings it reviewed - about 15% - came from devices that activated on their own.
This matters because it changes the risk model entirely. With intentional activation, a user can make an informed choice about what they say after the wake phrase. With accidental activation, the recording happens during conversations the user didn't know were being captured at all - conversations where they had every reason to believe they were speaking privately.
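The mechanism behind accidental activation is a confidence threshold: a wake-word detector scores each audio window, and anything above the threshold starts recording. Google's actual pipeline is proprietary; the sketch below uses hypothetical scores purely to illustrate how similar-sounding audio produces false accepts:

```python
# Minimal sketch of wake-word thresholding. The threshold and the
# scores are invented for illustration - not Google's actual values.
THRESHOLD = 0.80  # assumed operating point

def should_activate(confidence: float) -> bool:
    """Device starts capturing audio when confidence >= THRESHOLD."""
    return confidence >= THRESHOLD

# Hypothetical detector scores for different audio windows:
windows = {
    "'hey google' (spoken clearly)": 0.97,  # intended activation
    "TV dialogue, similar phonemes": 0.83,  # false accept -> accidental recording
    "background conversation":       0.41,  # correctly ignored
}

for label, score in windows.items():
    state = "RECORDING" if should_activate(score) else "ignored"
    print(f"{label}: {state}")
```

Lowering the threshold misses fewer real wake phrases but records more private audio by mistake; raising it does the reverse. The 153 accidental recordings in the leak sit on the wrong side of that tradeoff.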
Data protection authorities in Belgium and elsewhere in the EU opened investigations following the VRT report, and Google subsequently paused human review of Assistant audio across Europe while the practice was examined.
The $68 million settlement
Google settled a class-action lawsuit over the voice assistant recording practices for $68 million in 2026. The settlement, like Apple's, came without an admission of wrongdoing and covered a long period of past practice. Unlike Apple's $95 million settlement, it received less attention - partly because the 2019 story competed with simultaneous Alexa and Siri revelations, and partly because the Belgian whistleblower story originated outside US media.
Combined with Apple's $95 million settlement and Amazon's $25 million FTC settlement, legal accountability for cloud voice AI privacy practices now totals $188 million across three companies - for the same basic practice, exposed in the same year.
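The $188 million figure is just the sum of the three settlements cited in this series:

```python
# Settlement amounts in millions of USD, as reported in this series.
settlements = {
    "Apple (Siri class action)":      95,
    "Google (Assistant class action)": 68,
    "Amazon (FTC, Alexa)":             25,
}

total = sum(settlements.values())
print(f"Total: ${total} million")
```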
The year all three fell
April 2019: Bloomberg exposes Amazon. July 2019: VRT exposes Google. August 2019: The Guardian exposes Apple. Three separate whistleblowers and investigative teams, three months, three of the world's largest tech companies, the same practice.
This wasn't a coincidence of timing. It was a consequence of scale. When millions of devices are recording audio and that audio needs human review to improve speech recognition quality, you need a large workforce of contractors reviewing recordings. Large workforces mean people who have seen things they can't forget. Eventually, someone talks.
The structural lesson is the same one the settlements confirm financially: cloud voice AI asks you to trust a system that has demonstrated, repeatedly and at the largest possible scale, that the trust was not warranted. The trust is not asked for maliciously. It is asked for because the architecture requires it. The architecture sends audio to servers. Servers are operated by people. The people see the data.
Local inference removes the server from the chain. ToolPiper's voice dictation runs Parakeet v3 on your Mac's Neural Engine. There are no contractors because there is no server. There is no accidental activation data being reviewed in another country because the audio never traveled there. The architecture is the privacy guarantee - not a policy, not a settlement, not a promise.
Download ToolPiper at modelpiper.com.
Part of the Voice AI Privacy series. Related: Apple's $95M Siri Settlement. Amazon Alexa's Listening Program. Is ToolPiper Safe?
