Apple's privacy reputation is carefully built. "What happens on your iPhone stays on your iPhone." Billboards. Ad campaigns. Years of positioning. In 2019, a whistleblower described what was actually happening with Siri recordings. In 2025, Apple settled a class-action lawsuit over it for $95 million.

This is not ancient history. The settlement covered Siri device owners from 2014 to 2024. If you owned an iPhone, iPad, Apple Watch, HomePod, or Mac with Siri during that decade, your recordings were part of the program being litigated.

What did Apple do with Siri recordings?

Apple employed contractors to review a percentage of Siri voice recordings for quality assurance. Contractors regularly heard confidential conversations - medical discussions, business negotiations, intimate moments - many of them captured without the user intentionally activating Siri.

The practice was first reported in July 2019 by The Guardian, based on accounts from a whistleblower later identified as Thomas le Bonniec. Contractors described reviewing recordings that included:

Conversations between doctors and patients discussing diagnoses and treatment plans. Business negotiations. Drug deals. Couples having private conversations. All captured by Siri activating without the user's awareness - triggered by background noise, TV audio, or ambient sound that resembled the wake phrase.

The recordings were technically anonymized, but contractors reported they could often identify the person and location from context. A contractor who heard someone discuss their address, describe their home, and mention their doctor's name is not working with truly anonymous data.

What did Apple's privacy policy say?

Apple's privacy documentation at the time acknowledged that Siri data was reviewed to improve the service. The disclosure existed. What it lacked was any practical prominence - the kind of placement that would lead an ordinary user to understand that a human contractor might hear their conversation with their doctor.

This is the consistent pattern across every cloud voice privacy incident of this period. Amazon had it. Google had it. Microsoft had it. The disclosure was real. The informed consent was not. A privacy policy disclosure that requires a legal education to find and interpret is not the same as a user understanding what they're agreeing to.

How did Apple respond?

Apple's initial response in August 2019 was to suspend the Siri grading program entirely. In October 2019, with the release of iOS 13.2, Apple made significant changes: human review of recordings became opt-in rather than the default, users could delete their Siri audio history, and Apple stopped retaining audio recordings by default - switching to computer-generated transcripts only. Human review going forward was limited to Apple employees rather than contractors.

These were substantive changes. Apple's response was faster and more complete than those of most companies in the same situation. The opt-in model and audio deletion capability were genuinely useful controls.

They didn't prevent the lawsuit. The five years of recording review before those changes - 2014 to 2019 - were the basis for the class action.

The $95 million settlement

Apple settled the class-action lawsuit in 2025 for $95 million without admitting wrongdoing. The settlement covers US residents who owned a Siri-enabled device between September 17, 2014 and December 31, 2024. Eligible claimants can receive up to $20 per device, for a maximum of five devices - up to $100 per person.

$95 million is a large number. It is also approximately three hours of Apple's revenue. The financial cost to Apple is not the meaningful part of this story. The meaningful part is what it confirms: a decade of users speaking to Siri with an expectation of privacy that was not matched by the actual architecture of the product.

What this reveals about cloud voice architecture

Apple is not a bad actor in this story. The Siri grading program was a standard industry practice - Amazon, Google, and Microsoft all did the same thing in the same period. Apple's post-incident response was more thorough than most. The $95 million settlement represents accountability for past practice, not ongoing misconduct.

What the incident reveals is structural. Cloud voice AI requires sending audio to remote servers for processing. Remote servers are operated by people. Quality assurance on a voice AI product requires human review of some percentage of recordings to catch errors the automated system misses. These are not independent facts - they are a chain. The architecture produces the outcome.
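
To make the chain concrete, here is what its first link looks like in code - a hedged sketch, not any vendor's actual implementation. The endpoint and file path are hypothetical, but the shape of the request is what essentially every cloud speech-to-text integration shares: the audio bytes leave the device the moment the upload begins.

```swift
import Foundation

// Illustrative sketch only: the endpoint below is hypothetical.
let audio = FileManager.default.contents(atPath: "/tmp/utterance.wav")!

var request = URLRequest(url: URL(string: "https://speech.example.com/v1/transcribe")!)
request.httpMethod = "POST"
request.setValue("audio/wav", forHTTPHeaderField: "Content-Type")

// Link 1: the raw audio leaves the device.
// Link 2: a remote server, operated by people, receives and can retain it.
// Link 3: some fraction of retained audio is replayed for quality review.
let task = URLSession.shared.uploadTask(with: request, from: audio) { data, _, _ in
    if let data = data, let transcript = String(data: data, encoding: .utf8) {
        print(transcript)
    }
}
task.resume()

// In a script context, keep the process alive long enough for the upload to finish.
RunLoop.main.run()
```

Everything a privacy policy can regulate happens after `task.resume()` - after the audio has already left the machine.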

No privacy policy change, no opt-in mechanism, no compliance certification changes the chain. They govern what happens at specific points in it. The audio still travels. The server still receives it. The people who operate the server still have access to it.

Local voice inference breaks the chain at the first link. When Parakeet v3 runs on your Mac's Neural Engine, the audio goes from your microphone to a model in local memory. There are no contractors. There is no server. There is no chain to govern because the chain doesn't exist.
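
Apple's own Speech framework makes the contrast visible at the API level. The sketch below is illustrative only - it is not a claim about how ToolPiper or Parakeet v3 is implemented, and the file path is a placeholder - but the `requiresOnDeviceRecognition` flag shows what "no server" means in practice: the request fails rather than falling back to server-side recognition.

```swift
import Foundation
import Speech

// Minimal sketch using Apple's Speech framework: request on-device-only recognition.
SFSpeechRecognizer.requestAuthorization { status in
    guard status == .authorized,
          let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else { return }

    let request = SFSpeechURLRecognitionRequest(url: URL(fileURLWithPath: "/tmp/utterance.wav"))
    request.requiresOnDeviceRecognition = true   // fail rather than send audio to a server

    _ = recognizer.recognitionTask(with: request) { result, _ in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}

// Keep the process alive for the asynchronous callbacks in a script context.
RunLoop.main.run()
```

With that flag set, there is no upload step for a retention policy or a review program to govern.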

Apple knows this better than anyone. Their own privacy marketing has emphasized on-device processing for years. Siri itself has moved more processing to on-device with each hardware generation. The direction is clear. The gap between where the industry is and where it should have been is what the settlement quantifies.

Download ToolPiper at modelpiper.com for voice AI that runs entirely on your Mac's Neural Engine.

Part of the Voice AI Privacy series. Related: Wispr Flow's Privacy Incident. Is ToolPiper Safe? - how to verify local inference yourself.