Claude Code is good at writing code. It's not good at figuring out why that code breaks at runtime. The tool that writes your functions can't see your logs, can't check your API responses, and can't trace errors across services. Not because it lacks the ability. Because it lacks the data.

There's a simple fix. Give Claude Code a place to write structured logs and a way to read them back. That's what LogPiper does, and the whole workflow is four steps.

Why Claude Code Guesses Instead of Debugging

When Claude Code writes code that fails, it reads the terminal error and tries to infer the cause. If the error is clear - a syntax error, a missing import - it fixes it immediately. Good.

But most runtime errors aren't clear. A 500 from an API. A timeout with no message. A malformed response body that crashes a downstream parser. The terminal shows a stack trace, maybe an HTTP status code, maybe nothing useful at all.

So Claude Code guesses. "Let me try a different approach." "Let me add some error handling." "Let me wrap this in a try/catch." You've watched this happen. Three rounds of Claude Code rewriting the same function, each time treating a symptom because it can't see the disease. The actual fix might be obvious - a wrong field name in the request body, a missing header, a malformed URL - but the AI doesn't have the data to know that.

A human developer in this situation would add logging. Print the request body before sending it. Print the response when it comes back. Check the actual error message, not the status code. Claude Code can do the same thing, if you give it somewhere to write those logs and a way to query them afterward.

What Is LogPiper

ToolPiper includes a logging service called LogPiper. It's two HTTP endpoints: POST /log to write an entry, GET /logs to query entries. Any process on your machine can write to it. Any process can read from it. No SDK, no client library, no authentication on the ingestion side.

If ToolPiper is running, LogPiper is running. There's nothing to turn on.

The key property that makes this useful for AI coding assistants: LogPiper is plain HTTP. Claude Code doesn't need an MCP integration, a special plugin, or a language-specific client. It can construct a curl command or a requests.post() call. The barrier is zero. The AI instruments code with log statements the same way a human developer would add console.log, except the logs are structured, queryable, and they persist across runs.
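To make "plain HTTP" concrete, here's a minimal round trip - a sketch, not an official client. The endpoint paths and entry fields come from the description above; stdlib urllib is used so the snippet has no dependencies, and the assumption that GET /logs returns a JSON array is spelled out below.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

BASE = "http://127.0.0.1:9998"  # LogPiper's local address

def build_entry(source, level, event, message, data=None):
    """Assemble the structured entry LogPiper expects."""
    return {"source": source, "level": level, "event": event,
            "message": message, "data": data or {}}

def write_log(entry, timeout=2):
    """POST /log. Fire-and-forget: a missing server never breaks the app."""
    req = urllib.request.Request(
        f"{BASE}/log",
        data=json.dumps(entry).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            pass
    except (urllib.error.URLError, OSError):
        pass

def read_logs(**params):
    """GET /logs with optional filters like level='error'.
    Assumes the response is a JSON array; returns [] if unreachable."""
    qs = urllib.parse.urlencode(params)
    try:
        with urllib.request.urlopen(f"{BASE}/logs?{qs}", timeout=2) as resp:
            return json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return []
```

That's the whole integration surface: build a dict, POST it, GET it back.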

The Four-Step Debugging Workflow

Step 1: Clear the Noise

Before you start debugging, clear the old log entries so you're working with a clean buffer. Tell Claude Code:

"Clear the LogPiper logs: curl -X POST http://127.0.0.1:9998/clear"

Or run it yourself in a terminal. Either way, you start fresh. LogPiper holds up to 5,000 entries in a circular buffer, so clearing before a session means everything you see afterward is from this debugging run.

Step 2: Instrument

Tell Claude Code what you want logged and where. Something like:

"Add LogPiper logging to the data pipeline. POST errors and key state to http://127.0.0.1:9998/log with source 'my-app'. Include the request/response data in the data field."

Claude Code generates a small helper function and sprinkles log calls at the critical points. Here's what it typically produces in Python:

import requests

def log_event(level, event, message, data=None):
    try:
        requests.post("http://127.0.0.1:9998/log", json={
            "source": "my-app",
            "level": level,
            "event": event,
            "message": message,
            "data": data or {}
        }, timeout=2)
    except Exception:
        pass  # fire-and-forget

# Then at critical points:
log_event("info", "api.request", "Sending prompt to model", {
    "model": model_name,
    "messages": messages,
    "temperature": temp
})

response = client.chat.completions.create(...)

log_event("info", "api.response", "Model responded", {
    "status": "ok",
    "content_length": len(response.choices[0].message.content),
    "finish_reason": response.choices[0].finish_reason
})
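The snippet above covers the happy path. For the failure path, the same pattern belongs in an except block. A sketch of that shape - `call_api` is a hypothetical wrapper, not part of LogPiper, and the helper here is a stdlib-only variant of `log_event` so the example stands alone:

```python
import json
import urllib.error
import urllib.request

def log_event(level, event, message, data=None):
    """Fire-and-forget POST /log, stdlib-only variant of the helper above."""
    body = json.dumps({"source": "my-app", "level": level, "event": event,
                       "message": message, "data": data or {}}).encode()
    req = urllib.request.Request("http://127.0.0.1:9998/log", data=body,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=2)
    except (urllib.error.URLError, OSError):
        pass

def call_api(fn, **context):
    """Run fn; on failure, log an error-level entry with context, then re-raise."""
    try:
        return fn()
    except Exception as exc:
        log_event("error", "api.error", "Call failed", {
            "error_type": type(exc).__name__,
            "error": str(exc),
            **context,  # e.g. model name, message count - whatever aids debugging
        })
        raise  # re-raise so the failure still surfaces in the terminal
```

The error entry carries the exception type, message, and whatever request context you pass in - exactly the data the query step retrieves later.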

Notice what's happening. The AI is writing log statements that capture the exact data a developer would want when debugging: the full request parameters going in, the response shape coming back, and the error details when something fails. The timeout=2 and bare except mean logging never blocks your application. If ToolPiper isn't running, the call fails silently and your code keeps moving.

The source field is important. It lets you filter your app's logs from the rest of the traffic on the bus. ToolPiper's own internal events (engine loads, HTTP proxy traffic, MCP tool calls) all go through the same LogPiper instance. Without a source filter, you'd be reading through entries you didn't create.

Step 3: Reproduce

Run the code. Let it fail. The logs capture everything as it happens - errors, HTTP bodies, timing, state at each step. You don't need to be watching the terminal when it breaks. LogPiper holds the entries in memory until you query or clear them.

If the bug is intermittent, run the code multiple times. Each failure adds entries to the buffer. When you're ready to investigate, the data is waiting.

Step 4: Query and Fix

Tell Claude Code to read the logs:

"Check LogPiper for errors: curl 'http://127.0.0.1:9998/logs?level=error&limit=20'"

Claude Code runs the curl command, reads the structured JSON response, and sees the exact failure. Not a stack trace. Not a status code. The actual error data, including HTTP response bodies if the request went through ToolPiper's proxy.

The response comes back as a JSON array. Each entry has a timestamp, level, source, event, message, and a data object with whatever you logged. Claude Code parses this naturally - it's the same JSON format it works with everywhere else.
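In code, the query-and-parse step is a few lines. A stdlib-only sketch, using the entry field names described above (the exact JSON envelope is an assumption, so the function bails out to an empty list on anything unexpected):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

def recent_errors(limit=20, base="http://127.0.0.1:9998"):
    """Fetch recent error-level entries; returns [] if LogPiper is unreachable."""
    qs = urllib.parse.urlencode({"level": "error", "limit": limit})
    try:
        with urllib.request.urlopen(f"{base}/logs?{qs}", timeout=2) as resp:
            entries = json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return []
    if not isinstance(entries, list):
        return []
    # Keep just the fields a fix usually hinges on.
    return [{"event": e.get("event"), "message": e.get("message"),
             "data": e.get("data", {})} for e in entries]
```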

Now Claude Code fixes the code with real data instead of guessing. One cycle, not three.

What You Get That Terminal Output Doesn't Give You

Terminal output is a firehose. Everything scrolls past at once, and if you didn't have the right window open at the right time, you missed it. LogPiper gives you structure.

Filter by log level. Show only errors, hide the noise. ?level=error returns entries at error severity and above. You can also use warning, info, or debug to control the threshold.

Filter by source. Show only your app's logs, not ToolPiper internals. ?source=my-app isolates your entries from the background noise of engine lifecycle events, health checks, and internal HTTP traffic.

Filter by event type. Show only HTTP traffic, or only your custom events. ?event=api matches api.request, api.response, api.error by prefix. Event prefixes are hierarchical, so ?event=http catches http.request, http.response, and http.error in one query.
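These filters combine in a single query string. A small sketch of composing the URL - the parameter names come from the examples above; that LogPiper treats multiple filters as AND-ed conditions is an assumption:

```python
from urllib.parse import urlencode

def logs_url(base="http://127.0.0.1:9998", **filters):
    """Compose a GET /logs URL from any mix of the filters above."""
    return f"{base}/logs?{urlencode(filters)}" if filters else f"{base}/logs"

# Only my-app's errors, most recent 20:
print(logs_url(source="my-app", level="error", limit=20))
# → http://127.0.0.1:9998/logs?source=my-app&level=error&limit=20
```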

Full HTTP request and response bodies. When requests go through ToolPiper (inference, cloud proxy, MCP tools), LogPiper captures the complete request and response payloads, truncated at 8KB. The exact JSON your code sent. The exact JSON that came back. No guessing. Streaming responses (SSE, NDJSON) are detected automatically and tagged with isStreaming: true and a chunkCount.

Correlation IDs. Pass a correlationId when you log, and all entries with that ID become a single queryable group. Trace a multi-step pipeline from input to output. ToolPiper assigns correlation IDs automatically to its own workflows, so a pipeline that runs transcription, then LLM inference, then TTS will have all three stages grouped under one ID.
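A sketch of what that looks like from the logging side, using the transcription-inference-TTS pipeline above. Two assumptions are baked in: that correlationId is passed as a top-level field in the POST /log payload, and the grouping shown is done client-side after a query. The entries here are only built, not sent, to keep the sketch self-contained:

```python
import uuid
from collections import defaultdict

def make_entry(corr_id, event, message, data=None):
    """One pipeline-stage entry; correlationId ties the stages together."""
    return {"source": "my-app", "level": "info", "event": event,
            "message": message, "correlationId": corr_id, "data": data or {}}

corr = str(uuid.uuid4())  # one ID for the whole pipeline run
entries = [
    make_entry(corr, "pipeline.transcribe", "Audio transcribed"),
    make_entry(corr, "pipeline.infer", "Summary generated"),
    make_entry(corr, "pipeline.tts", "Speech synthesized"),
]

# After querying GET /logs, entries regroup by ID into one traceable run:
by_id = defaultdict(list)
for e in entries:
    by_id[e["correlationId"]].append(e["event"])
print(by_id[corr])
# → ['pipeline.transcribe', 'pipeline.infer', 'pipeline.tts']
```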

SSE streaming. Open curl -N 'http://127.0.0.1:9998/logs/stream?level=error' in a terminal and watch errors appear in real time as your code runs. Claude Code can't consume SSE streams, but you can watch while Claude Code works. Useful for long-running processes where you want to see failures as they happen instead of querying afterward.
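If you'd rather tail the stream from a script than from curl, a stdlib sketch - this assumes LogPiper emits standard SSE frames with data:-prefixed JSON lines, which is an assumption about the wire format, not something confirmed above:

```python
import json
import urllib.error
import urllib.request

def parse_sse_line(raw: bytes):
    """Decode one SSE frame; returns the entry dict, or None for non-data lines."""
    line = raw.decode().strip()
    if not line.startswith("data:"):
        return None
    return json.loads(line[len("data:"):])

def tail_errors(base="http://127.0.0.1:9998"):
    """Print error entries as they arrive on the live stream."""
    try:
        with urllib.request.urlopen(f"{base}/logs/stream?level=error") as stream:
            for raw in stream:
                entry = parse_sse_line(raw)
                if entry:
                    print(entry.get("event"), entry.get("message"))
    except (urllib.error.URLError, OSError):
        print("LogPiper not reachable; is ToolPiper running?")
```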

A Real Debugging Session

Here's a scenario that plays out regularly. You're building a Python script that calls a local LLM through ToolPiper's OpenAI-compatible API at http://127.0.0.1:9998/v1/chat/completions. The model loads fine. The request goes through. But the response is wrong - the summary is incoherent, or the model ignores your instructions, or it returns an empty string.

You tell Claude Code: "The summarization output is garbage. Add LogPiper logging to capture the full request and response, then run it again."

Claude Code adds log calls that capture the messages array, the model parameters, and the full response body. It also adds an error-level log in the except block to catch HTTP failures. You run the script. It produces the same bad output.

You tell Claude Code: "Query LogPiper for the recent logs: curl 'http://127.0.0.1:9998/logs?source=my-app&limit=10'"

Claude Code reads the response. The api.request entry shows the full messages array that was sent to the model. The problem is visible in the data: the system prompt reads "You are a summmary assistant" with a triple-m typo that the model interprets differently, or the user message is being double-escaped so the model sees literal \n characters instead of newlines, or the temperature is set to 2.0 instead of 0.2 because a config file had the wrong decimal. Something concrete. Something it can fix in one edit.

If the request went through ToolPiper's proxy (to a cloud API like OpenAI or Anthropic), the HTTP body capture is automatic - you don't even need to add custom logging. Query ?event=http&limit=10 and the request payload, response payload, status code, and duration are all there. This is particularly useful for debugging cloud API errors where the response body contains the actual error message that the Python client hides behind a generic exception.

Without LogPiper, Claude Code would have spent two or three rounds trying different prompt formats, changing the model, adding response parsing - all based on inference from the bad output alone. With LogPiper, it saw the actual input data and found the bug in seconds.

Putting LogPiper in Your CLAUDE.md

You don't have to tell Claude Code about LogPiper every time. Add a few lines to your project's CLAUDE.md file and it becomes part of Claude Code's standard debugging behavior:

## Debugging with LogPiper

When debugging runtime errors, use LogPiper (requires ToolPiper running):
- POST structured logs to http://127.0.0.1:9998/log with source 'my-app'
- Query errors: curl 'http://127.0.0.1:9998/logs?level=error&limit=20'
- Clear before debugging: curl -X POST http://127.0.0.1:9998/clear
- Log request/response bodies at critical API boundaries
- Always query LogPiper before guessing at a fix

That last line is the important one. It teaches Claude Code to check the data before theorizing. The same discipline a senior engineer brings to debugging, encoded as a project instruction.

Once this is in your CLAUDE.md, you can say "debug this" and Claude Code will add logging, run the code, query the results, and diagnose the problem without you specifying the workflow each time. The four-step cycle becomes implicit.

Works in Any Language, Any Framework

LogPiper is an HTTP endpoint. The examples above are Python, but the same pattern works everywhere.

In Node.js: fetch('http://127.0.0.1:9998/log', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({...}) }). In Go: http.Post("http://127.0.0.1:9998/log", "application/json", bytes.NewBuffer(payload)). In Rust: reqwest::Client::new().post("http://127.0.0.1:9998/log"). In a shell script: curl -s -X POST http://127.0.0.1:9998/log -d '{...}'. Whatever Claude Code is writing, it can add logging to it.

The entries are JSON with a consistent shape: source, level, event, message, data. Claude Code reads this structure on query and understands it immediately. No format to learn, no output to parse. And because the data field is a freeform JSON object, you can log anything - request bodies, config state, timing measurements, environment variables, parsed responses.

What LogPiper Isn't

LogPiper is an in-memory circular buffer. 5,000 entries, oldest evicted first. It's a debugging tool, not a production observability stack. It restarts clean when ToolPiper restarts. If you need to keep logs across sessions, use POST /export to write the current buffer to a JSON file before clearing.
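The export-before-clear step can be scripted. A sketch under the assumption that both POST /export and POST /clear take empty bodies and that the server (not the client) writes the export file, per the description above:

```python
import urllib.error
import urllib.request

BASE = "http://127.0.0.1:9998"

def post(path):
    """Fire a bare POST; returns True on success, False if LogPiper is unreachable."""
    req = urllib.request.Request(f"{BASE}{path}", data=b"", method="POST")
    try:
        with urllib.request.urlopen(req, timeout=2):
            return True
    except (urllib.error.URLError, OSError):
        return False

def snapshot_and_reset():
    """Persist the buffer to disk, then clear it for the next session."""
    if post("/export"):  # server writes the current buffer to a JSON file
        post("/clear")
        return True
    return False
```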

It only works when ToolPiper is running on your Mac. If you're debugging on a remote server or in CI, LogPiper won't help. It's a local development tool for local development problems.

Claude Code doesn't automatically check the logs. You need to tell it to query, or put the instruction in your CLAUDE.md. There's no magic feedback loop yet - the human closes the loop by prompting the query. Over time, this could become something AI coding tools do by default: check a local log bus before guessing at a fix. We think that's the direction things are heading. For now, one line in CLAUDE.md gets you most of the way there.

Try It

ToolPiper is a free download from the Mac App Store. Install it, and LogPiper is already running at http://127.0.0.1:9998. Next time Claude Code is spinning its wheels on a runtime error, try the four steps: clear, instrument, reproduce, query. You'll be surprised how often one query replaces three rounds of guessing.

This is part of the vibe debugging series on AI-assisted development. For the full picture, see Vibe Debugging: The Observability Gap in AI-Assisted Development.