Technical papers, product updates, and press releases from the ModelPiper team.
Step-by-step guide to install and configure Ollama on Mac — then discover ToolPiper, the one-app alternative with built-in inference, voice, vision, and 41 MCP tools.
How to install and use MLX-Audio for local text-to-speech on Mac — then discover ToolPiper, which bundles the same models in a native app with zero Python required.
How ToolPiper's accessibility-tree-first approach to browser automation enables AI-powered test generation that works with any model provider — not just MCP-aware clients.
How ToolPiper became the first MCP server to unify LLM inference, TTS, STT, embeddings, OCR, vision, browser automation, and RAG behind a single install — all in native Swift.
A deep dive into PiperSR's double-buffered ANE+Metal pipeline that upscales 360p video to 720p at 44.4 FPS — 1.5x realtime on Apple Silicon.
Detect human poses, track skeletons, and stream motion capture data in real time — all running on your Mac's Neural Engine. No cloud, no markers, no special hardware.
Upscale video from 360p to 720p at 44 FPS on your Mac's Neural Engine — no cloud upload, no watermark, audio preserved automatically.
Index your files and ask questions about them using local AI — retrieval-augmented generation running entirely on your Mac, no documents uploaded anywhere.
Upscale photos and screenshots 2x or 4x on your Mac using CoreML super-resolution models — no upload, no API, no quality loss from compression.
Reasoning models think step by step before answering. Now they run locally on your Mac — private, with zero per-query cost and full chain-of-thought transparency.
Drop an image into ModelPiper — a vision model describes what's in it, then text-to-speech reads the description aloud. All on-device, all private.
Extract text from scanned documents, photos, and screenshots using Apple Vision OCR — on-device, fast, and private. No cloud upload required.
Select any region of your screen, ask a question, get an answer — all locally. VisionPiper captures your screen and feeds it to a vision model running on your Mac.
Clone any voice from a short audio sample, entirely on your Mac — no biometric data uploaded anywhere. Voice is too sensitive for the cloud.
Real-time speech translation running entirely on your Mac — speak English, hear Portuguese. No cloud, no Google Translate, no data leaving your machine.
Drop a meeting recording into ModelPiper and get a structured summary — decisions, action items, key points — without uploading confidential audio to any cloud service.
A full voice conversation with AI — speech-to-text, language model, text-to-speech — running entirely on your Mac. No cloud, no latency, no data leaving your machine.
Modern AI text-to-speech voices sound human — and they run locally on your Mac's GPU. No cloud service ever sees your text. Here's how to use them.
Transcribe meetings, lectures, and voice memos on your Mac with Whisper-class accuracy — entirely offline, with no audio uploaded to any server.
Run a private AI chatbot entirely on your Mac — no data leaves your machine, no API keys, no internet required. Here's how local LLM chat actually works on Apple Silicon.
Your Mac has dedicated AI hardware built in. Here's why local-first AI matters — privacy by architecture, zero API costs, no rate limits — and how ModelPiper makes it practical.