Technical papers, product updates, and press releases from the ModelPiper team.
Install ToolPiper and connect 111 MCP tools to Claude Code in under a minute. Local inference, browser automation, desktop control, testing, and video - all on your Mac.
The write-run-check-fix loop breaks when "check" means reading terminal scrollback. A structured log store turns AI debugging from guessing into querying.
When a 3-step AI pipeline returns wrong output, which step failed? Correlation IDs group every event from one execution into a single trace.
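The correlation-ID idea can be sketched in a few lines: tag every event from one pipeline execution with the same generated ID, so a later query reassembles the full trace. The field names and in-memory log here are illustrative, not LogPiper's actual schema.

```python
import uuid

def run_pipeline(steps, log):
    """Run each step, logging every event under one correlation ID."""
    correlation_id = str(uuid.uuid4())  # one ID per execution
    value = None
    for name, step in steps:
        log.append({"correlation_id": correlation_id, "step": name})
        value = step(value)
    return correlation_id, value

log = []
steps = [
    ("fetch",     lambda _: "raw text"),
    ("summarize", lambda v: v.upper()),
    ("format",    lambda v: f"* {v}"),
]
cid, result = run_pipeline(steps, log)

# Querying by correlation ID yields every event from this run,
# so a wrong final output can be walked back step by step.
trace = [e for e in log if e["correlation_id"] == cid]
```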
LogPiper captures full request and response bodies for every API call through ToolPiper. When your local LLM returns garbage, the exact prompt is in the logs.
Every MCP observability tool monitors agents from outside. LogPiper is a logging service the agent uses directly - bidirectional, persistent, and queryable.
Python, shell, JavaScript, Swift - all logging to one HTTP endpoint. No SDK, no dependency, no config. LogPiper aggregates logs from any process on localhost.
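The "no SDK, no dependency" claim is easy to illustrate: plain stdlib HTTP is enough to ship a structured log line. The port, path, and payload shape below are placeholders, not LogPiper's documented endpoint; the fire-and-forget behavior (never crash the caller) is the point being sketched.

```python
import json
import urllib.request

def post_log(level, message, endpoint="http://localhost:4040/logs"):
    """Fire-and-forget POST of one structured log line.

    The port and path are placeholders, not LogPiper's documented
    endpoint; any language with an HTTP client can do the same.
    """
    payload = json.dumps({"level": level, "message": message}).encode()
    req = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=1).close()
        return True
    except OSError:
        return False  # fire-and-forget: a dead log server never breaks the app

sent = post_log("error", "local LLM returned empty completion")
```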
Cursor Debug Mode gives your AI a temporary log server. LogPiper gives it a persistent, queryable log store. Both solve the same problem differently.
Tell Claude Code to POST logs to LogPiper, reproduce the bug, query the errors, hand them back. A persistent debugging feedback loop in four steps.
AI gets 80% of the code right. The last 20% requires debugging. LogPiper gives you structured error data without a traditional debugger.
ToolPiper includes a real-time logging service any app, script, or MCP tool can write to and query. Fire-and-forget ingestion, structured queries, SSE streaming, correlation IDs, and full HTTP body capture - zero setup.
ActionPiper runs STT on Apple's Neural Engine with zero internet dependency. Push-to-talk and 142 voice commands work offline. Free.
AI coding assistants write code but can't see runtime behavior. LogPiper gives them a queryable log store - the missing feedback loop for agentic development.
Push-to-talk dictation for developers on Apple's Neural Engine. 140ms, private, free. Plus 29 MCP tools for system control from Claude Code.
Four genuinely free dictation options on Mac. Apple built-in, Whisper.cpp, MacWhisper, and ActionPiper compared for accuracy, latency, and privacy.
ActionPiper runs Parakeet STT on Apple's Neural Engine. 140ms, 25 languages, fully offline. Your voice never leaves your Mac.
Record browser tests visually, replay with self-healing selectors, export to Playwright or Cypress. No code required. AX-native, local, Chrome-based.
ActionPiper runs STT on Apple's Neural Engine. Free, private, 140ms latency, plus 142 voice commands Wispr Flow doesn't have.
Passive health monitors run after every test step. Console errors, JS exceptions, HTTP failures - caught automatically with zero configuration.
Hard numbers on E2E test maintenance. 30-50% of testing time goes to upkeep. AX selectors and self-healing change the equation. Free on Mac.
Three temporal assertion modes that replace brittle waits. Always holds for duration, eventually within deadline, next on immediate check. 100ms polling.
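The semantics of those three modes can be sketched with a 100ms polling loop: "eventually" passes if the predicate holds at any poll before the deadline, "always" fails at the first poll where it doesn't, and "next" is simply an immediate check. This is a toy illustration of the semantics, not PiperTest's implementation.

```python
import time

POLL = 0.1  # 100 ms polling interval

def eventually(predicate, deadline):
    """True if predicate holds at some poll before the deadline."""
    end = time.monotonic() + deadline
    while time.monotonic() < end:
        if predicate():
            return True
        time.sleep(POLL)
    return predicate()

def always(predicate, duration):
    """True only if predicate holds at every poll for the duration."""
    end = time.monotonic() + duration
    while time.monotonic() < end:
        if not predicate():
            return False
        time.sleep(POLL)
    return predicate()

# "next" needs no helper: it is an immediate predicate() call.
start = time.monotonic()
def ready_after(secs):
    return lambda: time.monotonic() - start >= secs

became_true = eventually(ready_after(0.3), deadline=1.0)  # condition flips mid-wait
never_held = always(ready_after(10.0), duration=0.3)      # fails on the first poll
```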
Interaction coverage shows which buttons, forms, and links your tests actually touch - not just which code lines execute. PiperProbe scans the AX tree.
Enterprise testing tools proved that self-healing and visual authoring matter. They also gate those features behind $500+/month pricing. PiperTest delivers the same capabilities locally on your Mac, for free.
The technical case for testing against Chrome's real accessibility tree instead of the DOM. How AX selectors work, why Playwright's getByRole is a DOM simulation, and what changes when you query the tree that screen readers actually use.
AX-native test selectors double as accessibility audits. If your app isn't accessible, your selectors won't resolve. Free on Mac.
Generate browser tests with any AI model. PiperTest feeds the AX tree as plain text, AI reasons about what to test, and tests run locally on Mac.
Export PiperTest recordings to clean Playwright or Cypress code. Deterministic selector mapping, mock rendering, temporal comments. Zero vendor lock-in.
20 MCP tools for AI-driven browser testing. AI agents take AX snapshots, generate test steps, run them with self-healing, and export to Playwright. Works with Claude Code, Cursor, or any MCP client.
An honest comparison of PiperTest vs Cypress for teams frustrated by Cypress Cloud costs, missing multi-tab support, Safari support still experimental after 6 years, and cy.prompt() sending page data to remote servers. Real feature comparison, MCP tool analysis, and migration path.
Stop chasing brittle CSS selectors. PiperTest targets the accessibility tree and self-heals broken selectors in 5-15ms, locally on your Mac.
You know every user flow in your app. You just can't write Playwright tests. PiperTest lets you record, replay, and export browser tests visually - no code, no IDE, no config files.
An honest comparison of PiperTest vs Playwright for teams frustrated by maintenance burden, selector fragility, and non-developer exclusion. Real AX tree, self-healing, visual format, 20 MCP tools.
Selenium still dominates browser automation with 50M monthly downloads and 10K+ job postings. But 45% of teams report constant test breakage. Here's what a modern Selenium alternative actually looks like in 2026.
Define a screenplay, let AI record and narrate your demo, and render a finished MP4. All local on Mac. No cloud, no editing timeline.
Capture any region of your Mac screen, record to MP4, export as GIF or WebP, and stream live at 30fps to AI vision models - all from a free menu bar app.
The definitive 3-way comparison of Playwright, Cypress, and PiperTest for 2026. Master comparison table, MCP tool analysis, real AX tree vs DOM injection, performance data, and honest recommendations.
How self-healing selectors actually work - three modes from 5ms AX fuzzy matching to AI-assisted repair. Technical deep-dive with comparison to Testim, mabl, Cypress, and Playwright.
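The cheapest healing mode, fuzzy matching against current accessibility labels, can be sketched with stdlib difflib: when a recorded label no longer exists verbatim, take the closest surviving candidate above a similarity cutoff. The 0.6 threshold and function shape are illustrative, not PiperTest's actual algorithm.

```python
import difflib

def heal_selector(broken_label, ax_labels, cutoff=0.6):
    """Repair a stale selector by fuzzy-matching the recorded label
    against the labels currently in the accessibility tree.

    Returns the best surviving candidate, or None when nothing is
    close enough. The 0.6 cutoff is a placeholder threshold.
    """
    matches = difflib.get_close_matches(broken_label, ax_labels,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

# A renamed button: the recorded label "Sign In" no longer exists
# verbatim, but its nearest AX-tree neighbor does.
current_labels = ["Sign in", "Create account", "Forgot password?"]
healed = heal_selector("Sign In", current_labels)
```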
Record browser tests with AX-enriched selectors that self-heal. No browser driver, no brittle CSS. Direct CDP WebSocket, structured steps, one-click export.
AI that types and talks but can't move a window or toggle dark mode. ActionPiper closes that gap with 26 action domains, voice commands, and MCP tools.
Preview full-size images and videos by hovering over thumbnails - with intelligent CDN pattern discovery, no tracking, and no data collection. For Chrome, Firefox, and Safari.
Hold a key, speak, release. Text appears at your cursor or your Mac executes a command. All local, all on Apple Neural Engine, 140ms latency.
Record and stream audio from any individual Mac app - Chrome, Spotify, Zoom - without installing virtual audio drivers. AudioPiper uses Core Audio Taps built into macOS.
See words appear on screen as people speak, processed entirely on your Mac. Streaming STT with AudioPiper and FluidAudio on the Neural Engine.
A free Mac clipboard manager with AI-powered text expansion, push-to-talk dictation, and 200+ item history - replacing three separate apps with one menu bar tool.
AI browser automation that keeps your page content local. ToolPiper's 14 CDP tools let a local LLM drive Chrome via the accessibility tree.
Search your codebase with AI using local RAG on Mac. Semantic code search, hybrid vector + keyword retrieval, all on-device. No code leaves your machine.
Run AI agents locally on your Mac with 104 MCP tools. Tool calling, multi-step reasoning, desktop control - no cloud APIs, no data leaving your machine.
Scrape JavaScript-heavy websites using a real browser, detect 16 frontend frameworks, and extract content in 7 formats. All local, no cloud APIs.
Apple Intelligence runs on the Neural Engine. Open models run on Metal GPU. ToolPiper runs both in one app with smart routing and a shared interface.
Run embedding models and vector search entirely on your Mac. Three local paths, HNSW indexing, and an OpenAI-compatible API. Zero data exposure.
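What a vector index answers can be shown with a brute-force cosine-similarity scan in pure Python: embed the query, score every stored vector, return the top matches. ToolPiper's HNSW index answers the same query in sub-linear time; the linear scan and 3-dimensional "embeddings" below are only the concept.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query, index, k=2):
    """Brute-force nearest-neighbor scan; an HNSW index returns the
    same answer without scoring every stored vector."""
    scored = sorted(index, key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [doc for doc, _ in scored[:k]]

# Toy 3-dim "embeddings"; real models emit hundreds of dimensions.
index = [
    ("invoice.pdf", [0.9, 0.1, 0.0]),
    ("notes.md",    [0.1, 0.9, 0.1]),
    ("photo.heic",  [0.0, 0.2, 0.9]),
]
top = search([1.0, 0.0, 0.0], index, k=1)
```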
Run an OpenAI-compatible API on localhost with ToolPiper. Same SDK, same code, same prompts. Just change the base URL and your LLM runs locally on Mac.
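The base-URL swap can be shown even without an SDK: the wire format is the same Chat Completions JSON, only the host changes. Port 8080 and the model name below are placeholders, not ToolPiper's documented defaults; the request is built but not sent, since no server is assumed running.

```python
import json
import urllib.request

# Only this line differs from pointing at the hosted API.
# Port 8080 is an assumed placeholder, not ToolPiper's documented default.
BASE_URL = "http://localhost:8080/v1"

def chat_request(prompt, model="local-model"):
    """Build the same Chat Completions request an OpenAI SDK client
    would send; the body and headers are unchanged from the hosted API."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer unused"},  # local server ignores the key
    )

req = chat_request("Summarize this repo's README.")
```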
ToolPiper is a 111-tool MCP server for macOS. Local inference, browser automation, voice, vision, desktop control, and testing from one native app.
How much RAM do LLMs need on Mac? Understand Apple Silicon unified memory, model sizing, and how ToolPiper prevents out-of-memory crashes.
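The sizing arithmetic is simple enough to write down: weights dominate, so RAM is roughly parameter count times bytes per weight, padded for the KV cache and runtime buffers. The 20% overhead factor below is a rough rule of thumb, not ToolPiper's actual accounting.

```python
def estimated_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Back-of-envelope LLM memory estimate.

    RAM (GB) ~= params (B) * bits / 8, padded ~20% for KV cache and
    runtime buffers. The 1.2 factor is a ballpark assumption.
    """
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * overhead

# A 7B model quantized to 4 bits: 7 * 4 / 8 = 3.5 GB of weights,
# about 4.2 GB with overhead, comfortably inside 16 GB of
# Apple Silicon unified memory.
need = estimated_ram_gb(7, 4)
```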
ChatGPT Plus costs $240/year. Local AI on your Mac costs nothing per query. Here is what you gain, what you lose, and when each makes sense.
Consistent benchmarks for local LLM speed on Mac. Token generation rates across M1 through M4 chips, with real models and real workloads.
Hundreds of models on HuggingFace, one Mac. Here's the decision tree for picking the right local LLM based on your RAM, use case, and quality needs.
How to install and use MLX-Audio for local text-to-speech on Mac - then discover ToolPiper, which bundles the same models in a native app with zero Python required.
Step-by-step guide to install and configure Ollama on Mac - then discover ToolPiper, the one-app alternative with built-in inference, voice, vision, and 41 MCP tools.
How ToolPiper's accessibility-tree-first approach to browser automation enables AI-powered test generation that works with any model provider - not just MCP-aware clients.
How ToolPiper became the first MCP server to unify LLM inference, TTS, STT, embeddings, OCR, vision, browser automation, and RAG behind a single install - all in native Swift.
A deep dive into PiperSR's double-buffered ANE+Metal pipeline that upscales 360p video to 720p at 44.4 FPS - 1.5x realtime on Apple Silicon.
Detect human poses, track skeletons, and stream motion capture data in real time - all running on your Mac's Neural Engine. No cloud, no markers, no special hardware.
Upscale video from 360p to 720p at 44 FPS on your Mac's Neural Engine - no cloud upload, no watermark, audio preserved automatically.
Index your files and ask questions about them using local AI - retrieval-augmented generation running entirely on your Mac, no documents uploaded anywhere.
Upscale photos and screenshots 2x or 4x on your Mac using CoreML super-resolution models - no upload, no API, no quality loss from compression.
Reasoning models think step by step before answering. Now they run locally on your Mac - private, free per query, with full chain-of-thought transparency.
Drop an image into ModelPiper - a vision model describes what's in it, then text-to-speech reads the description aloud. All on-device, all private.
Extract text from scanned documents, photos, and screenshots using Apple Vision OCR - on-device, fast, and private. No cloud upload required.
Select any region of your screen, ask a question, get an answer - all locally. VisionPiper captures your screen and feeds it to a vision model running on your Mac.
Clone any voice from a short audio sample, entirely on your Mac - no biometric data uploaded anywhere. Voice is too sensitive for the cloud.
Real-time speech translation running entirely on your Mac - speak English, hear Portuguese. No cloud, no Google Translate, no data leaving your machine.
Drop a meeting recording into ModelPiper and get a structured summary - decisions, action items, key points - without uploading confidential audio to any cloud service.
A full voice conversation with AI - speech-to-text, language model, text-to-speech - running entirely on your Mac. No cloud, no latency, no data leaving your machine.
Modern AI text-to-speech voices sound human - and they run locally on your Mac's GPU. No cloud service ever sees your text. Here's how to use it.
Transcribe meetings, lectures, and voice memos on your Mac with Whisper-class accuracy - entirely offline, with no audio uploaded to any server.
Run a private AI chatbot entirely on your Mac - no data leaves your machine, no API keys, no internet required. Here's how local LLM chat actually works on Apple Silicon.
Your Mac has dedicated AI hardware built in. Here's why local-first AI matters - privacy by architecture, zero API costs, no rate limits - and how ModelPiper makes it practical.