You install ToolPiper for the local LLM inference, or the MCP tools, or the browser automation. What you get for free is a logging service that quietly solves a problem every developer building with local AI has: where did things go wrong, and when?

LogPiper is a real-time logging bus built into ToolPiper. Any process on your machine - a Python script, a shell command, an Angular app, a browser extension, another Swift app - can write structured log entries to it over HTTP. Any process can query those logs, stream them in real time, or export them to disk. There's nothing to install, nothing to configure, and no account to create. If ToolPiper is running, LogPiper is running.

The Problem With Local AI Debugging

When you're building with cloud APIs, observability is someone else's problem. OpenAI gives you a dashboard. Anthropic gives you a console. The request leaves your machine, hits an endpoint, and the provider's infrastructure handles logging, metrics, and error reporting.

Local AI flips this. The model runs on your machine. The inference server runs on your machine. The API gateway, the proxy, the embedding engine, the TTS pipeline - all local. When something fails, there's no dashboard to check. You're left grepping through multiple terminal windows, correlating timestamps by hand, and guessing which component dropped the ball.

This gets worse when you have multiple local apps collaborating. ToolPiper runs inference. VisionPiper captures screens. AudioPiper records audio. ModelPiper orchestrates pipelines. A browser extension detects image sources. Each has its own stdout, its own error handling, its own log format. The failure you're debugging might span three of them.

How LogPiper Works

LogPiper is two HTTP endpoints and a query engine. That's it.

Write a Log Entry

curl -X POST http://127.0.0.1:9998/log \
  -H "Content-Type: application/json" \
  -d '{
    "source": "my-script",
    "level": "info",
    "event": "pipeline.start",
    "message": "Starting data processing",
    "data": {"inputFile": "dataset.csv", "rows": 50000},
    "correlationId": "job_abc123"
  }'

That's a fire-and-forget POST. The response is {"ok": true}. If ToolPiper isn't running, the request fails silently - pair it with a short client timeout (the examples below use two seconds) and logging never blocks your app.

Need to send multiple entries at once? Batch them:

curl -X POST http://127.0.0.1:9998/logs \
  -H "Content-Type: application/json" \
  -d '[{"source": "my-script", "level": "info", ...}, ...]'

Query Logs

# Recent errors from any source
curl "http://127.0.0.1:9998/logs?level=error&limit=20"

# All HTTP traffic
curl "http://127.0.0.1:9998/logs?event=http&limit=100"

# Logs from a specific app
curl "http://127.0.0.1:9998/logs?source=my-script&limit=50"

# Everything related to one job
curl "http://127.0.0.1:9998/logs?correlationId=job_abc123"

Queries filter by level (minimum severity), source, event prefix, correlation ID, and time range. Results come back as JSON arrays sorted by timestamp.
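The same filters work from any HTTP client, not just curl. Here's a minimal Python sketch; the parameter names (level, source, event, correlationId, limit) come straight from the queries above, passed as keyword arguments:

```python
import json
import urllib.parse
import urllib.request

BASE = "http://127.0.0.1:9998"

def build_query_url(**filters):
    """Turn keyword filters into a /logs query URL."""
    qs = urllib.parse.urlencode({k: str(v) for k, v in filters.items()})
    return f"{BASE}/logs?{qs}"

def query_logs(timeout=2, **filters):
    """Fetch matching entries as a list of dicts; raises if ToolPiper is down."""
    with urllib.request.urlopen(build_query_url(**filters), timeout=timeout) as resp:
        return json.loads(resp.read())

# recent_errors = query_logs(level="error", limit=20)
# job_trail = query_logs(correlationId="job_abc123")
```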

Stream in Real Time

# Stream all errors as they happen (SSE)
curl -N "http://127.0.0.1:9998/logs/stream?level=error"

# Stream events from a specific source
curl -N "http://127.0.0.1:9998/logs/stream?source=my-script"

The stream endpoint uses Server-Sent Events. Open it in one terminal while your app runs in another. Every matching log entry appears the moment it's ingested. No polling, no delay.
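If you'd rather consume the stream programmatically than watch it in a terminal, a few lines of Python can parse the event-stream format. This sketch assumes each SSE data payload is one JSON log entry:

```python
import json

def parse_sse(lines):
    """Yield one parsed JSON object per SSE event.

    In the event-stream format, 'data:' lines carry the payload and a
    blank line ends an event; multi-line data fields are joined with
    newlines, per the SSE spec.
    """
    buf = []
    for line in lines:
        if line.startswith("data:"):
            buf.append(line[5:].lstrip())
        elif line.strip() == "" and buf:
            yield json.loads("\n".join(buf))
            buf = []

# Example: feed it the live stream line by line
# import urllib.request
# with urllib.request.urlopen("http://127.0.0.1:9998/logs/stream?level=error") as resp:
#     for entry in parse_sse(raw.decode().rstrip("\n") for raw in resp):
#         print(entry["source"], entry["message"])
```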

What Makes It Useful

Full HTTP Body Capture

This is the feature that saves the most debugging time. When any request flows through ToolPiper - LLM inference, cloud API proxy, model downloads, MCP tool calls - LogPiper captures the full request and response bodies, truncated at 8KB.

// What you see in the logs:
{
  "event": "http.request",
  "data": {
    "url": "/v1/chat/completions",
    "method": "POST",
    "requestBody": "{\"model\":\"llama-3.2-3b\",\"messages\":[...]}"
  }
}
{
  "event": "http.response",
  "data": {
    "url": "/v1/chat/completions",
    "status": 200,
    "responseBody": "{\"choices\":[{\"message\":{\"content\":\"...\"}}]}",
    "durationMs": 1250
  }
}

When a cloud API returns a 400 error, you don't have to guess what you sent. The request body is right there. When a model returns unexpected output, you can see the exact prompt that produced it. When a proxy request fails, you see both what was sent and what came back.

Streaming responses (SSE, NDJSON) are detected automatically and tagged with isStreaming: true and a chunkCount.

Correlation IDs

When ToolPiper executes a workflow - say, a pipeline that transcribes audio, summarizes the text, and speaks the result - every log entry in that chain shares a correlationId like exec_abc12345. Query by that ID and you get the complete timeline of a single operation across all components.

Your own apps can use this too. Pass a correlationId when you log, and all entries with that ID become a queryable group. This is how you trace a multi-step job through different services without manually correlating timestamps.
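A sketch of that pattern in Python - the source names and the job_... ID format here are illustrative, not anything LogPiper requires:

```python
import json
import urllib.request
import uuid

def step_entry(source, event, message, correlation_id, level="info"):
    """One structured entry; every step of the job reuses correlation_id."""
    return {"source": source, "level": level, "event": event,
            "message": message, "correlationId": correlation_id}

def send(entry, timeout=2):
    """Best-effort POST to LogPiper; never let logging break the pipeline."""
    req = urllib.request.Request(
        "http://127.0.0.1:9998/log",
        data=json.dumps(entry).encode(),
        headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=timeout)
    except OSError:
        pass

job_id = f"job_{uuid.uuid4().hex[:8]}"
steps = [
    step_entry("transcriber", "job.transcribe", "Audio transcribed", job_id),
    step_entry("summarizer", "job.summarize", "Transcript summarized", job_id),
    step_entry("tts", "job.speak", "Summary spoken", job_id),
]
for s in steps:
    send(s)

# Later, one query returns all three entries in order:
#   curl "http://127.0.0.1:9998/logs?correlationId=<job_id>"
```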

Engine Lifecycle Events

ToolPiper manages inference engines (llama-server processes, TTS backends, STT backends). LogPiper captures their full lifecycle:

  • engine.load - model loaded, which backend, memory usage
  • engine.unload - model unloaded, reason
  • engine.crash - process died, exit code, last stderr
  • engine.restart - automatic restart triggered

If your local model stops responding mid-conversation, query event=engine and you'll know immediately whether the engine crashed, ran out of memory, or was unloaded by the resource scheduler.
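Once you've pulled those entries back from the query endpoint, a small helper can turn them into a verdict. The data field names used here (exitCode, reason) follow the lifecycle list above but are assumptions - check them against your actual entries:

```python
def explain_engine_state(entries):
    """Walk engine.* entries newest-first and report why the engine stopped.

    Entries are assumed sorted oldest-first, as the query endpoint
    returns them. Field names like data["exitCode"] are illustrative.
    """
    for e in reversed(entries):
        if e["event"] == "engine.crash":
            return f"crashed (exit code {e.get('data', {}).get('exitCode', '?')})"
        if e["event"] == "engine.unload":
            return f"unloaded: {e.get('data', {}).get('reason', 'unknown')}"
        if e["event"] == "engine.load":
            return "still loaded"
    return "no engine events recorded"

sample = [
    {"event": "engine.load", "data": {"model": "llama-3.2-3b"}},
    {"event": "engine.crash", "data": {"exitCode": -9}},
]
print(explain_engine_state(sample))  # crashed (exit code -9)
```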

60+ Event Types

LogPiper uses a prefix-based event taxonomy. Query by prefix and you get all subtypes:

Prefix        Events                                     What It Covers
http.*        request, response, error, cancel, stream   All HTTP traffic with full payloads
engine.*      load, unload, crash, restart               Inference engine lifecycle
workflow.*    start, complete, error, cancel             Pipeline orchestration
tool.*        start, stop, output, crash, restart        Backend tool processes
download.*    start, complete, error, cancel             Model downloads from HuggingFace
sse.*         connect, disconnect, broadcast, error      Server-Sent Event connections
mcp.*         tool complete, tool error                  MCP tool execution
app.*         launch, shutdown, crash, error             Application lifecycle

The event field is a free-form string. Your own apps can use any prefix scheme you want - myapp.auth.login, myapp.queue.stall, whatever makes sense for your domain.

Why This Is Better Than Console Logs

Console logs are write-only. You print them, they scroll past, and they're gone. If you didn't have the right terminal open at the right time, you missed it.

LogPiper is write-once, query-many. Every entry lands in an in-memory buffer (5,000 entries, circular) that supports filtered queries, time-range lookups, and real-time subscriptions. The data is structured - level, source, event, correlation ID - so you can slice it precisely instead of grepping through unstructured text.

The key difference: you can query logs after the fact without having been watching when they happened. Your script crashed at 3 AM? Query errors from the last hour. A user reports a bug? Ask them to export their logs. An MCP tool returned garbage? Check event=http for the exact API payloads.

Integration Is One HTTP Call

There's no SDK to install, no client library to import, no dependency to manage. If your language can make an HTTP POST, it can log to LogPiper.

From Python

import requests

def log(level, event, message, data=None, correlation_id=None):
    requests.post("http://127.0.0.1:9998/log", json={
        "source": "my-python-app",
        "level": level,
        "event": event,
        "message": message,
        "data": data or {},
        "correlationId": correlation_id
    }, timeout=2)

log("info", "training.epoch", "Epoch 5 complete", {"loss": 0.023, "lr": 1e-4})

From a Shell Script

log_event() {
  curl -s -X POST http://127.0.0.1:9998/log \
    -H "Content-Type: application/json" \
    -d "{\"source\":\"deploy\",\"level\":\"$1\",\"event\":\"$2\",\"message\":\"$3\"}" \
    --max-time 2 > /dev/null 2>&1 &
}

log_event info deploy.start "Deploying v2.1.0 to staging"
rsync -az ./dist/ staging:/var/www/
log_event info deploy.complete "Deploy finished"

From JavaScript (Node or Browser)

fetch('http://127.0.0.1:9998/log', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({
    source: 'my-app',
    level: 'error',
    event: 'payment.failed',
    message: 'Stripe returned 402',
    data: {customerId: 'cus_123', amount: 4999}
  })
}).catch(() => {}) // fire-and-forget

From Any MCP Tool

If you're using ToolPiper's MCP tools from Claude Code, Cursor, or any MCP-compatible AI assistant, every tool call is automatically logged. You get the full input parameters, the execution duration, and the output - without writing any logging code. The AI's tool usage becomes a queryable audit trail.

The Debugging Workflow

Here's how to use LogPiper when something breaks. This works whether you're debugging your own app, a ToolPiper feature, or an MCP tool interaction.

# 1. Clear old noise
curl -X POST http://127.0.0.1:9998/clear

# 2. Reproduce the problem
# (run your app, trigger the bug, execute the MCP tool, etc.)

# 3. Check what happened
curl "http://127.0.0.1:9998/logs?level=error&limit=20"

# 4. If it's an API issue, check the HTTP traffic
curl "http://127.0.0.1:9998/logs?event=http&limit=50"

# 5. If you need everything, export to a file
curl -X POST http://127.0.0.1:9998/export
# Writes JSON to ~/Library/Application Support/ToolPiper/exports/

The clear-reproduce-query cycle takes seconds. No log file to find, no grep to construct, no timestamps to correlate. And because LogPiper captures HTTP bodies, you often find the root cause on the first query - the error message is right there in the responseBody field.

Designed to Disappear

LogPiper has no UI in ToolPiper. There's no log viewer, no dashboard, no configuration panel. This is intentional. The interface is HTTP - the same interface your apps already speak. Query it from curl, from a script, from your AI assistant, from a monitoring tool. Pipe the SSE stream into jq. Build a custom dashboard if you want one. The data is yours.

The ingestion endpoints (/log and /logs) and the query endpoint (GET /logs) are unauthenticated. No session key, no token, no handshake. This means any local process can log and query without credential management. The stream and management endpoints require ToolPiper's session key, which is auto-generated on launch and stored at a known path for scripts that need it.

The buffer is in-memory (5,000 entries, oldest evicted first). LogPiper is a debugging tool, not an archival system. It's always fast, never fills your disk, and restarts clean. If you need persistence, export before clearing.

What Developers Are Doing With It

  • AI pipeline debugging: Logging each step of a multi-model pipeline (transcribe → summarize → speak) with a shared correlation ID, then querying the full chain when output quality drops
  • MCP tool audit trails: Reviewing exactly what parameters an AI assistant sent to a tool and what came back - useful for prompt tuning and tool schema refinement
  • CI/CD visibility: Shell scripts logging deploy events to LogPiper, then querying for errors if the deploy fails - all without setting up a dedicated logging service
  • Cross-app debugging: Tracing a request from the Angular web app through the ToolPiper proxy to a cloud API and back, with full payloads at every hop
  • Model performance tracking: Logging inference duration and token counts per request, then querying to compare performance across models or quantization levels
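The last item on that list is easy to sketch: wrap the inference call, time it, and attach whatever metrics you care about. Everything named here - the perf.inference event, the perf-tracker source - is just one possible scheme, not something LogPiper prescribes:

```python
import json
import time
import urllib.request

def build_perf_entry(model, duration_ms, **metrics):
    """Structured entry for one inference call; extra metrics
    (token counts, quantization level, ...) ride along in data."""
    return {
        "source": "perf-tracker",
        "level": "info",
        "event": "perf.inference",
        "message": f"{model} finished in {duration_ms}ms",
        "data": {"model": model, "durationMs": duration_ms, **metrics},
    }

def log_timed(model, fn, *args, **kwargs):
    """Run fn, time it, and ship the entry to LogPiper (best-effort)."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    entry = build_perf_entry(model, round((time.monotonic() - start) * 1000))
    try:
        req = urllib.request.Request(
            "http://127.0.0.1:9998/log",
            data=json.dumps(entry).encode(),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=2)
    except OSError:
        pass  # logging stays best-effort
    return result

# answer = log_timed("llama-3.2-3b", run_inference, prompt)
```

Query event=perf afterwards and you can compare durationMs across models without touching the app again.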

The Logging Endpoint You Already Have

If ToolPiper is installed on your Mac, LogPiper is already running. There's nothing to enable, nothing to configure, and nothing to pay for. Every ToolPiper installation - free or Pro - includes the full logging bus.

The next time you're debugging a local AI workflow, an MCP tool interaction, or a multi-app integration, skip the console.log archaeology. POST to http://127.0.0.1:9998/log and query http://127.0.0.1:9998/logs. The data is structured, the queries are instant, and the HTTP bodies are already captured.

LogPiper won't replace your production observability stack, and it isn't trying to. It's the local debugging companion that makes anything you build with ToolPiper - or anything else on your Mac - faster to diagnose when it goes sideways.

ToolPiper is a free download from the Mac App Store.