---
title: "Is ToolPiper Safe? What You Can Verify and How"
description: "ToolPiper's local features process everything on your Mac. No audio, prompts, or documents travel over the network. Here's exactly what stays local, what doesn't, and how to confirm it yourself."
date: 2026-05-01
author: "Ben Racicot"
tags: ["Privacy", "macOS", "Local AI", "Apple Silicon", "Security"]
type: "article"
canonical: "https://modelpiper.com/blog/is-toolpiper-safe/"
---

# Is ToolPiper Safe? What You Can Verify and How

> ToolPiper's local features process everything on your Mac. No audio, prompts, or documents travel over the network. Here's exactly what stays local, what doesn't, and how to confirm it yourself.

## TL;DR

ToolPiper's local features - voice dictation, chat, RAG, OCR, image and video upscale - process everything on your Mac. No audio, prompts, or documents leave your device. There are no analytics, no telemetry, and no usage tracking in ToolPiper or the bundled ModelPiper interface. You can verify all of this yourself by monitoring network traffic while ToolPiper runs. This article covers exactly what stays local, the three things ToolPiper does send over the network (account auth, model downloads, and cloud provider calls you configure), and why cloud AI products can't answer the same question.

Yes. And you don't have to take our word for it.

Most AI privacy claims are promises. "We don't store your data." "Privacy Mode is enabled by default." "We're SOC 2 certified." These are statements about what a company says it does with your data after it arrives on their servers. They require trust. They can't be independently verified by the person asking the question.

ToolPiper's local features are different. The audio for voice dictation, the text in your chat, the documents in your RAG index, the images you process - none of it travels over the network. That's not a privacy policy statement. It's an architectural fact. And you can confirm it yourself.

## What does "local inference" actually mean?

Local inference means the AI model runs directly on your Mac's hardware - its Neural Engine, Metal GPU, and unified memory. Your input goes from your device to the model in local memory, and the output comes back the same way. No network request is made. Nothing leaves your machine.

When you hold Right Option and dictate in ToolPiper, the audio travels from your microphone to Parakeet v3 running on your Mac's Neural Engine. The transcription happens in local memory. The result pastes into your app. The audio never left the device. When you type a message to a local LLM, that text goes to llama.cpp running on your Metal GPU. The response generates in local memory. Same principle.

This is what makes the privacy claim verifiable. The processing happens on hardware you own. The network path doesn't exist.

## How do you verify it yourself?

Open Activity Monitor while ToolPiper is running and click the Network tab. Start a voice dictation session, send a chat message to a local model, run an OCR scan. Watch the Sent Bytes and Rcvd Bytes columns for ToolPiper's process.

You'll see nothing for local operations. No bytes out during transcription. No upload during chat. No network activity during image processing. The counters stay flat because no request is being made.
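If you prefer the Terminal, macOS's built-in `nettop` reports the same per-process counters. A minimal sketch - the process name `ToolPiper` is an assumption here; match it to whatever name Activity Monitor shows on your machine:

```shell
# Log five one-second samples of ToolPiper's network counters (macOS).
# -P: per-process mode, -p: filter by process name or PID,
# -L 5: log five samples in CSV form, -J: select the columns to show.
nettop -P -p ToolPiper -L 5 -J bytes_in,bytes_out
```

While you dictate or chat with a local model, the bytes_in and bytes_out values should not grow between samples.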

For a more complete picture, run Little Snitch or any firewall that logs outbound connections. Block ToolPiper's network access entirely, then use every local feature. Dictation works. Chat works. RAG works. OCR works. Voice chat works. None of these features require a network connection because none of them use one.
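You can also run a blunter version of this test with no firewall software at all: take the whole machine offline and use every local feature. A sketch using macOS's built-in `networksetup` - the Wi-Fi device is commonly `en0`, but verify yours with the first command:

```shell
# Find your Wi-Fi device name (commonly en0, but check the output).
networksetup -listallhardwareports

# Take Wi-Fi down, exercise every local feature, then restore it.
networksetup -setairportpower en0 off
# ... dictation, chat, RAG, OCR should all keep working offline ...
networksetup -setairportpower en0 on
```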

That test is the answer to the question. Not a policy. Not a certification. A verifiable result.

## What does ToolPiper send over the network?

Three things. We'll be specific.

**Account authentication.** When you sign in, ToolPiper sends your email address to request a one-time code. When you enter the code, it's verified server-side. When your license is checked, a license token is validated against Stripe. None of these requests contain your AI content - no prompts, no audio, no documents.

**Model downloads.** When you download a model from the model browser, ToolPiper fetches the GGUF file from HuggingFace. You initiate this explicitly by clicking Download. After the model is on your device, no further network activity occurs for that model. It runs entirely locally from that point on.
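To see where a download actually goes, you can list ToolPiper's open network connections while it runs. A sketch using `lsof` - again assuming the process name is `ToolPiper`; adjust it to match what Activity Monitor shows:

```shell
# Show ToolPiper's open internet connections (macOS).
# -i: internet files only, -a: AND the filters together,
# -c: match processes whose command name starts with "ToolPiper".
lsof -i -a -c ToolPiper
```

During a model download you should see connections only to huggingface.co and its CDN hosts. At idle, using local models with no cloud provider configured, the list should be empty.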

**Cloud provider API calls, if you configure them.** ToolPiper supports connecting to external providers - OpenAI, Anthropic, Groq, Mistral, and others. If you configure one and use it, your messages go to that provider's servers. This is expected and transparent: you configured the connection, you know the provider, you know where the data goes. ToolPiper doesn't proxy or intercept these requests - it sends them directly to the provider you chose.

That is the complete list. No analytics. No telemetry. No usage tracking. ToolPiper does not report which models you run, how long you use it, which features you activate, or any other behavioral data. The ModelPiper interface bundled inside ToolPiper - the same web app that runs at modelpiper.com - is served locally from your machine and sends no analytics calls when running locally.

If you're using local models and haven't configured a cloud provider, the only outbound traffic from ToolPiper is account auth and any model downloads you've initiated. That's it.

## Why can't cloud AI products answer the same question?

Cloud AI products process your data on remote servers, which means your input must travel over the network to be processed. There is no way for a user to audit what happens to their data during that transit or on those servers. Privacy policies and certifications govern what the company says it does with data - they cannot be independently verified by the user.

This is a structural limitation, not a criticism of any specific company. When you ask a cloud voice app "can I verify that my audio is private," the honest answer is no. You can read their privacy policy. You can see that they have a Privacy Mode setting. You can check their compliance certifications. But you cannot inspect their servers, you cannot monitor what happens to your audio during processing, and you cannot verify that Privacy Mode traffic follows a different path than standard traffic.

The Wispr Flow incident illustrates what happens when users look closely. A developer monitored their network traffic, discovered that audio and screenshots were being sent to cloud servers in ways that weren't clearly disclosed, published their findings, and was banned by the company for doing so. The CTO later apologized - confirming the findings were accurate. The company made real changes: Privacy Mode, training opt-in off by default, compliance certifications. But here's the question those changes don't answer: how would you know if Privacy Mode traffic were being used for training under a different label? The answer is you wouldn't, and you can't. Full analysis: [Wispr Flow's Privacy Incident](/blog/wispr-flow-privacy-incident).

The same question applies to every cloud AI product. It's not a policy problem. It's a verification problem. The processing happens on infrastructure you can't see.

## What about when you use cloud providers through ToolPiper?

If you connect OpenAI or Anthropic and use them through ModelPiper, your messages go to those providers exactly as they would from any other app. ToolPiper doesn't add any intermediate storage or logging. The request goes directly from your device to the provider's API.
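One way to check that there is no intermediary is to capture where the TLS traffic goes. The payloads are encrypted, so you only see destination hosts - which is all this check needs. A sketch assuming Wi-Fi on `en0`; adjust the interface for your machine:

```shell
# Capture HTTPS traffic to OpenAI's API host (macOS; requires sudo).
# The hostname resolves when the capture starts; packets appear only
# for traffic going directly to that host.
sudo tcpdump -i en0 'tcp port 443 and host api.openai.com'
```

While sending messages to a configured OpenAI connection, the capture should show traffic to that host and no unexpected third-party endpoint in between.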

The privacy question for those interactions is then the same as using those providers directly. Read OpenAI's data usage policy if you're sending OpenAI API calls. Read Anthropic's if you're using Claude. ToolPiper's local architecture doesn't change what those providers do with your data - and we don't claim otherwise.

The point isn't that cloud providers are bad. It's that local and cloud are different trust models. Local: you verify it yourself. Cloud: you trust the provider. Knowing which you're using at any moment is what matters.

## Is ToolPiper open source?

The MCP server specification and shared tooling are open. The core macOS application is not currently open source. We know that for some users, open source is the only fully satisfying answer to "how do I verify this." Fair point. The network traffic verification above is the practical substitute for most people - it doesn't require reading source code, and it tests the actual runtime behavior rather than the stated implementation.

We're considering a public build of ToolPiper's inference server components. If we ship one, it'll be documented at [modelpiper.com](https://modelpiper.com).

## The short answer

Local features stay local. You can verify that yourself in five minutes with Activity Monitor. Three things use the network: account auth (no content), model downloads (you initiate), and cloud provider calls (only if you configured them, only to the provider you chose).

Everything else - voice, chat, RAG, OCR, vision, image upscale, video upscale - runs on your Mac. No policy required. No trust required. Just observable architecture.

Download ToolPiper at [modelpiper.com](https://modelpiper.com) and run the Activity Monitor test yourself.

_Related: [Wispr Flow's Privacy Incident](/blog/wispr-flow-privacy-incident) - what the ban revealed about cloud voice AI. [ToolPiper vs Wispr Flow](/blog/toolpiper-vs-wispr-flow) - full comparison. [Voice Chat with Local AI](/blog/voice-chat-mac-local-ai) - how the on-device pipeline works._

## FAQ

### Is ToolPiper safe to use for sensitive or confidential content?

Yes, when using local features. Voice dictation, local LLM chat, RAG document Q&A, OCR, and image processing all run on your Mac with no network activity. You can verify this by monitoring ToolPiper's network traffic in Activity Monitor during use. If you connect a cloud provider like OpenAI, that content goes to OpenAI - the same as using them directly.

### Does ToolPiper send my voice recordings anywhere?

No. Voice dictation and voice chat use Parakeet v3, a speech recognition model that runs on your Mac's Neural Engine. The audio goes from your microphone to the model in local memory and nowhere else. There is no upload step. You can confirm this by watching ToolPiper's network activity in Activity Monitor during a dictation session - you'll see no outbound traffic.

### Does ToolPiper use my data to train AI models?

No. ToolPiper has no mechanism to collect or transmit your AI content - your prompts, voice, documents, or images - because local inference never sends that content over the network. There is no server receiving your data, so there is no training pipeline to feed. This is an architectural fact, not a policy commitment.

### Does ToolPiper collect analytics or usage data?

No. ToolPiper does not collect analytics, telemetry, or usage tracking of any kind. It does not report which models you run, which features you use, how long sessions last, or any other behavioral data. The ModelPiper interface bundled inside ToolPiper - which runs locally from your machine - does not send analytics calls when running locally. Nothing about how you use the app is reported anywhere.

### What does ToolPiper actually send over the network?

Three things: account authentication when you sign in (email and license token, no AI content), model downloads when you explicitly click Download in the model browser, and cloud provider API calls if you configure a provider like OpenAI or Anthropic. No analytics, no telemetry, no usage tracking. Nothing else leaves your device.

### How is ToolPiper different from cloud voice AI like Wispr Flow?

Cloud voice products send audio to remote servers for processing. ToolPiper processes audio on your Mac's Neural Engine. The privacy difference is not a policy difference - it's an architectural one. Cloud products can offer Privacy Mode or zero-retention promises, but audio must still leave your device to be processed. With ToolPiper, audio never leaves your device. You can verify this yourself by monitoring network traffic - something you cannot do with cloud processing.

### Is ToolPiper open source?

The MCP server specification and shared tooling are open. The core macOS application is not currently open source. For users who want to verify privacy without reading source code, the practical test is monitoring network traffic during local feature use - ToolPiper shows no outbound activity for voice, chat, RAG, OCR, or image processing. This tests actual runtime behavior rather than stated implementation.
