The bot joins your meeting. It has a name - usually something like "Otter.ai Notetaker" or "Fireflies.ai Fred." The meeting host added it. You probably didn't notice, or you noticed and assumed it was fine because the host invited it.
What that bot does next is send everything said in your meeting to a cloud server in real time. Everyone on the call. Every word. Even after the meeting host has left.
In August 2025, NPR reported on a class-action lawsuit filed against Otter.ai alleging exactly this: recording participants without proper consent, including people who had no idea they were being recorded by a cloud service operated by a company they had no relationship with.
How do AI meeting recorders work?
AI meeting recorders join video calls as a visible participant bot. Audio is captured and transmitted in real time to the provider's cloud servers, where it is transcribed using AI models. The provider stores the transcript and, depending on settings and terms of service, may use the recording for AI model training. Consent is typically obtained from the meeting host only, not from all participants.
The consent structure is the core problem. When a meeting host adds Otter.ai to a Zoom call, Otter obtains an agreement from that host. The other twelve people on the call have agreed to nothing with Otter. They may not have noticed the bot. They may have noticed and assumed it was just a meeting tool. They may speak freely about clients, personnel, contracts, and confidential business - not knowing their words are being sent to a cloud service in real time and potentially used to train AI models.
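The asymmetry is simple enough to state in code. Here is a minimal sketch of the consent model described above - every name in it is hypothetical and illustrative, not any vendor's actual API:

```python
# Illustrative model of the consent structure described above.
# All names are hypothetical -- this is not any vendor's real API.

def bot_joins_call(host: str, participants: list[str]) -> dict:
    """Model who has actually agreed to the provider's terms."""
    # The provider's agreement is with the one person who added the bot.
    consented = {host}
    # Everyone else is recorded under consent they never gave.
    recorded_without_agreement = [p for p in participants if p not in consented]
    return {
        "consented": consented,
        "recorded_without_agreement": recorded_without_agreement,
    }

call = bot_joins_call(
    "host@example.com",
    ["host@example.com", "client@example.com", "source@example.com"],
)
print(call["recorded_without_agreement"])
# -> ['client@example.com', 'source@example.com']
```

One agreement in, every voice on the call out - that is the whole model.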
The Otter.ai lawsuit
The complaint in the 2025 class-action lawsuit against Otter.ai (Brewer v. Otter.ai) includes two incidents that illustrate stakes well beyond ordinary meeting notes.
The investor meeting: A user set up Otter to record a Zoom call with investors. After the user left the call, Otter continued recording. The resulting transcript included confidential details about the business discussed after the host had departed - information that wasn't intended for the recording at all.
The Uyghur activist interview: A journalist used Otter to transcribe an interview with a Uyghur human rights activist. The journalist later realized that Otter shares data with third parties. The implications are significant: a Uyghur activist's identity and statements, transmitted to cloud servers controlled by an American company with third-party data sharing relationships, in a context where that information could be used to identify or endanger them.
This is not an edge case. It is a foreseeable consequence of putting sensitive conversations through a cloud transcription service without understanding the data sharing structure.
The all-party consent problem
Many US states - California, Florida, Illinois, Washington, and others - require all-party consent for recording conversations. Recording someone without their knowledge or agreement in these states is not just an ethical problem. It is a legal violation.
AI meeting recorders exist in a legally uncertain position. The bot is visible on the call - so there is an argument for implied consent. But implied consent from noticing a bot in a meeting interface is different from informed consent from all participants who understand where their audio is going, how it will be stored, and whether it will be used for training.
The Otter.ai lawsuit argues that this distinction matters. The outcome will influence how AI meeting tools handle consent across the industry.
What Otter.ai says in its defense
Otter.ai claims SOC 2 compliance, encryption, and a "proprietary method" for de-identifying data before using it for training. The company states that it does not manually review audio recordings, but it does not publicly explain its de-identification process.
"We de-identify the data before training" is a statement with a known gap. Voice recordings are biometric. Vocal patterns, cadence, and acoustic signature are identifying even after names are removed. De-identification of text transcripts is more achievable. De-identification of voice audio at scale, in a way that genuinely prevents re-identification, is a harder problem than most companies acknowledge publicly.
The broader category
Otter.ai is not unique. Fireflies, Fathom, Grain, and every AI meeting recorder that joins your call as a bot operates on the same basic model: audio leaves your meeting in real time and goes to a cloud server. Consent is from the host. Everyone else is covered by whatever consent the host gave on their behalf.
The question isn't whether these tools are useful. They are. The question is whether the people whose voices are recorded and transmitted understood what they were agreeing to - and in most cases, they agreed to nothing at all.
For internal meetings, where all participants are employees of the same organization and the organization has evaluated the tool, the consent structure is more defensible. For calls with external participants, clients, sources, or anyone who hasn't been informed about the recording and its destination, the structure breaks down.
Download ToolPiper at modelpiper.com. ToolPiper's voice transcription runs locally on your Mac - no cloud service joins your calls, no audio leaves your device.
Part of the Voice AI Privacy series. Related: Amazon Alexa's Listening Program. Is ToolPiper Safe?
