Yes. And you don't have to take our word for it.
Most AI privacy claims are promises. "We don't store your data." "Privacy Mode is enabled by default." "We're SOC 2 certified." These are statements about what a company says it does with your data after it arrives on its servers. They require trust. They can't be independently verified by the person asking the question.
ToolPiper's local features are different. The audio for voice dictation, the text in your chat, the documents in your RAG index, the images you process - none of it travels over the network. That's not a privacy policy statement. It's an architectural fact. And you can confirm it yourself.
What does "local inference" actually mean?
Local inference means the AI model runs directly on your Mac's hardware - its Neural Engine, Metal GPU, and unified memory. Your input reaches the model through local memory, and the output comes back the same way. No network request is made. Nothing leaves your machine.
When you hold Right Option and dictate in ToolPiper, the audio travels from your microphone to Parakeet v3 running on your Mac's Neural Engine. The transcription happens in local memory. The result pastes into your app. The audio never left the device. When you type a message to a local LLM, that text goes to llama.cpp running on your Metal GPU. The response generates in local memory. Same principle.
This is what makes the privacy claim verifiable. The processing happens on hardware you own. The network path doesn't exist.
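A loopback-only guarantee like this can also be enforced in code. The sketch below is hypothetical - the endpoint, port, and the use of llama.cpp's OpenAI-compatible local server are assumptions about how such a setup might look, not a description of ToolPiper's internals - but it shows the principle: refuse to send a prompt anywhere that isn't the local machine.

```python
import ipaddress
import json
import urllib.request
from urllib.parse import urlsplit

# Hypothetical local endpoint. llama.cpp's bundled server exposes an
# OpenAI-compatible API on localhost; the port here is an assumption.
LOCAL_ENDPOINT = "http://127.0.0.1:8080/v1/chat/completions"

def is_loopback(url: str) -> bool:
    """True only if the URL points at the local machine."""
    host = urlsplit(url).hostname or ""
    if host == "localhost":
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False  # a remote hostname, not a loopback address

def local_chat(prompt: str, endpoint: str = LOCAL_ENDPOINT) -> str:
    # Refuse to send anything that would leave the device.
    if not is_loopback(endpoint):
        raise ValueError(f"refusing non-local endpoint: {endpoint}")
    body = json.dumps(
        {"messages": [{"role": "user", "content": prompt}]}
    ).encode()
    req = urllib.request.Request(
        endpoint, data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The check is trivial precisely because the property is architectural: a loopback address cannot route off the device.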
How do you verify it yourself?
Open Activity Monitor while ToolPiper is running. Click the Network tab. Start a voice dictation session, send a chat message to a local model, run an OCR scan. Watch the Sent Bytes column for ToolPiper's process.
You'll see nothing for local operations. No bytes out during transcription. No upload during chat. No network activity during image processing. The column stays flat because there's no request being made.
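The same check can be scripted. This is a minimal sketch, assuming the process is named ToolPiper, that shells out to `lsof` (shipped with macOS) to list a process's open internet connections; loopback traffic is ignored because it never leaves the machine.

```python
import re
import subprocess

LOOPBACK = ("127.", "localhost", "[::1]")

def remote_connections(lsof_output: str) -> list[str]:
    """Extract remote endpoints of established TCP connections
    from `lsof -i -nP` output, skipping loopback traffic."""
    remotes = []
    for line in lsof_output.splitlines():
        m = re.search(r"->(\S+?):(\d+)\s+\(ESTABLISHED\)", line)
        if m and not m.group(1).startswith(LOOPBACK):
            remotes.append(f"{m.group(1)}:{m.group(2)}")
    return remotes

def check_process(pid: int) -> list[str]:
    # -i: internet files, -nP: numeric hosts/ports, -a -p: AND with this pid
    out = subprocess.run(
        ["lsof", "-i", "-nP", "-a", "-p", str(pid)],
        capture_output=True, text=True,
    ).stdout
    return remote_connections(out)
```

Run `check_process` against ToolPiper's pid during a dictation session and the expected result for local operations is an empty list.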
For a more complete picture, run Little Snitch or any firewall that logs outbound connections. Block ToolPiper's network access entirely, then use every local feature. Dictation works. Chat works. RAG works. OCR works. Voice chat works. None of these features require a network connection because none of them use one.
That test is the answer to the question. Not a policy. Not a certification. A verifiable result.
What does ToolPiper send over the network?
Three things. We'll be specific.
Account authentication. When you sign in, ToolPiper sends your email address to request a one-time code. When you enter the code, it's verified server-side. When your license is checked, a license token is validated against Stripe. None of these requests contain your AI content - no prompts, no audio, no documents.
Model downloads. When you download a model from the model browser, ToolPiper fetches the GGUF file from HuggingFace. You initiate this explicitly by clicking Download. After the model is on your device, no further network activity occurs for that model. It runs entirely locally from that point on.
Cloud provider API calls, if you configure them. ToolPiper supports connecting to external providers - OpenAI, Anthropic, Groq, Mistral, and others. If you configure one and use it, your messages go to that provider's servers. This is expected and transparent: you configured the connection, you know the provider, you know where the data goes. ToolPiper doesn't proxy or intercept these requests - it sends them directly to the provider you chose.
That is the complete list. No analytics. No telemetry. No usage tracking. ToolPiper does not report back which models you run, how long you use it, what features you activate, or any other behavioral data. The ModelPiper interface bundled inside ToolPiper - the same web app that runs at modelpiper.com - is served from your own machine and makes no analytics calls when running locally. Nothing about how you use the app is reported anywhere.
If you're using local models and haven't configured a cloud provider, the only outbound traffic from ToolPiper is account auth and any model downloads you've initiated. That's it.
Why can't cloud AI products answer the same question?
Cloud AI products process your data on remote servers, which means your input must travel over the network to be processed. There is no way for a user to audit what happens to their data during that transit or on those servers. Privacy policies and certifications govern what the company says it does with data - they cannot be independently verified by the user.
This is a structural limitation, not a criticism of any specific company. When you ask a cloud voice app "can I verify that my audio is private," the honest answer is no. You can read their privacy policy. You can see that they have a Privacy Mode setting. You can check their compliance certifications. But you cannot inspect their servers, you cannot monitor what happens to your audio during processing, and you cannot verify that Privacy Mode traffic follows a different path than standard traffic.
The Wispr Flow incident illustrates what happens when users look closely. A developer monitored their network traffic, discovered that audio and screenshots were being sent to cloud servers in ways that weren't clearly disclosed, published their findings, and was banned by the company for doing so. The CTO later apologized - confirming the findings were accurate. The company made real changes: Privacy Mode, training opt-in off by default, compliance certifications. But here's the question those changes don't answer: how would you know if Privacy Mode traffic were being used for training under a different label? The answer is you wouldn't, and you can't. Full analysis: Wispr Flow's Privacy Incident.
The same question applies to every cloud AI product. It's not a policy problem. It's a verification problem. The processing happens on infrastructure you can't see.
What about when you use cloud providers through ToolPiper?
If you connect OpenAI or Anthropic and use them through ModelPiper, your messages go to those providers exactly as they would from any other app. ToolPiper doesn't add any intermediate storage or logging. The request goes directly from your device to the provider's API.
The privacy question for those interactions is then the same as using those providers directly. Read OpenAI's data usage policy if you're sending OpenAI API calls. Read Anthropic's if you're using Claude. ToolPiper's local architecture doesn't change what those providers do with your data - and we don't claim otherwise.
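As a sketch of what "directly" means: the request targets the provider's own documented endpoint, with no intermediate host in between. The model id and API key below are placeholders.

```python
import json
import urllib.request

def direct_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build a request straight to the provider's documented API -
    no proxy, no intermediary host. Illustrative only."""
    body = json.dumps({
        "model": "gpt-4o-mini",  # placeholder: whichever model you chose
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",  # the provider itself
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

req = direct_request("sk-placeholder", "hello")
```

The URL is the provider's; what happens to the payload after it arrives is governed by that provider's policy, exactly as the paragraph above says.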
The point isn't that cloud providers are bad. It's that local and cloud are different trust models. Local: you verify it yourself. Cloud: you trust the provider. Knowing which you're using at any moment is what matters.
Is ToolPiper open source?
The MCP server specification and shared tooling are open. The core macOS application is not currently open source. We know that for some users, open source is the only fully satisfying answer to "how do I verify this." Fair point. The network traffic verification above is the practical substitute for most people - it doesn't require reading source code, and it tests the actual runtime behavior rather than the stated implementation.
We're considering a public build of ToolPiper's inference server components. If that changes, it'll be documented at modelpiper.com.
The short answer
Local features stay local. You can verify that yourself in five minutes with Activity Monitor. Three things use the network: account auth (no content), model downloads (you initiate), and cloud provider calls (only if you configured them, only to the provider you chose).
Everything else - voice, chat, RAG, OCR, vision, image upscale, video upscale - runs on your Mac. No policy required. No trust required. Just observable architecture.
Download ToolPiper at modelpiper.com and run the Activity Monitor test yourself.
Related: Wispr Flow's Privacy Incident - what the ban revealed about cloud voice AI. ToolPiper vs Wispr Flow - full comparison. Voice Chat with Local AI - how the on-device pipeline works.
