You're building something that touches three languages. A Python data pipeline feeds a Node.js API that talks to a Swift backend. Each has its own logging story. Python has the logging module. Node has winston or pino. Swift has OSLog. None of them talk to each other. When something breaks across the boundary, you're grepping three separate outputs in three separate terminals, correlating timestamps by eye, and hoping the clocks agree.
This is the normal state of cross-language development on a Mac. It shouldn't be.
Why Is Cross-Language Logging Still This Hard?
The standard answer to centralized logging is infrastructure. Elasticsearch with Kibana. Grafana with Loki. Fluentd or Logstash with agents in each process. These tools work well for production systems at scale. They are absurd for local development.
You're not running a cluster. You're running three processes on one machine. You want to see errors from all of them in one place. You don't want to write a docker-compose.yml for that. You don't want to install a logging agent into your Python virtualenv, a different one into your Node project, and a third into your Swift package dependencies. You definitely don't want to maintain configuration files that map each agent to a central collector.
The overhead scales with the number of languages in your stack. Two languages means two agents, two configs, two sets of dependencies to keep updated. Four languages means four. Every language has its own idiomatic logging library, its own format, its own opinions about structured data. Getting them to agree on a schema and ship to the same destination is a project in itself.
For production, that project is worth doing. For development, you need something that works in five minutes and doesn't touch your dependency tree.
What If Logging Was an HTTP POST?
LogPiper is a logging bus built into ToolPiper. It exposes two endpoints and a query engine. Any process on your Mac that can make an HTTP request can write structured log entries and read them back. There's nothing to install per-language, nothing to configure, and no dependency to add to any project.
The write interface is a JSON POST:
POST http://127.0.0.1:9998/log
Content-Type: application/json
{
  "source": "my-app",
  "level": "error",
  "event": "db.connection",
  "message": "Connection pool exhausted",
  "data": {"activeConnections": 50, "maxPool": 50},
  "correlationId": "job_20260404_001"
}

The read interface is a filtered GET:

GET http://127.0.0.1:9998/logs?level=error&limit=20

That's the entire integration surface. Every field except message is optional. The source field is how you identify which process logged the entry. The level field (debug, info, warn, error) controls filtering. The event field is a free-form string for categorization. The data field carries any structured payload you want. The correlationId ties related entries together across processes.
If ToolPiper is running, the logging bus is running. If ToolPiper isn't running, the POST fails silently. Your app doesn't slow down, doesn't crash, doesn't wait.
Five Languages, One Destination
The best way to show this is with code. Each example below is a complete, copy-pasteable logging function. None of them import a logging library. None of them require configuration.
Python
import requests

def log(level, event, message, data=None):
    try:
        requests.post("http://127.0.0.1:9998/log", json={
            "source": "data-pipeline",
            "level": level,
            "event": event,
            "message": message,
            "data": data or {}
        }, timeout=2)
    except requests.RequestException:
        pass  # ToolPiper isn't running; keep going

log("info", "pipeline.start", "Processing batch", {"rows": 50000})
log("error", "pipeline.fail", "CSV parse error", {"line": 3847, "raw": "malformed..."})

The timeout=2 and the except clause are the safety net. If ToolPiper is down, the call fails within 2 seconds, the exception is swallowed, and your pipeline keeps running. In normal operation, a localhost POST takes under a millisecond.
JavaScript (Node.js)
async function log(level, event, message, data = {}) {
  fetch('http://127.0.0.1:9998/log', {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify({ source: 'api-server', level, event, message, data })
  }).catch(() => {})
}

await log('info', 'server.start', 'Listening on port 3001')
await log('error', 'auth.fail', 'Token expired', { userId: 'u_42', expiredAt: '2026-04-03T23:59:00Z' })

The .catch(() => {}) swallows failures silently. Fire-and-forget. The API server never blocks on logging.
Shell (Bash/Zsh)
log_event() {
  curl -s -X POST http://127.0.0.1:9998/log \
    -H "Content-Type: application/json" \
    -d "{\"source\":\"deploy\",\"level\":\"$1\",\"event\":\"$2\",\"message\":\"$3\"}" \
    --max-time 2 > /dev/null 2>&1 &
}

log_event info deploy.start "Deploying v2.1.0 to staging"
rsync -az ./dist/ staging:/var/www/
log_event info deploy.complete "Deploy finished"

The trailing & sends the curl to the background. Your deploy script doesn't wait for the log call to finish. The --max-time 2 kills the request if ToolPiper isn't reachable.
Swift
func log(_ level: String, event: String, message: String) {
    var request = URLRequest(url: URL(string: "http://127.0.0.1:9998/log")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: [
        "source": "backend", "level": level, "event": event, "message": message
    ])
    request.timeoutInterval = 2
    URLSession.shared.dataTask(with: request) { _, _, _ in }.resume()
}

log("info", event: "model.load", message: "Loading llama-3.2-3b")
log("error", event: "model.crash", message: "llama-server exited with code 137")

The dataTask callback is empty. The Swift app fires the request and moves on. No delegate, no response handling, no retry logic.
Go
import (
    "bytes"
    "encoding/json"
    "net/http"
    "time"
)

func logEntry(level, event, message string) {
    body, _ := json.Marshal(map[string]string{
        "source": "worker", "level": level, "event": event, "message": message,
    })
    client := &http.Client{Timeout: 2 * time.Second}
    go func() {
        // Fire-and-forget: ignore errors, close the body on success.
        if resp, err := client.Post("http://127.0.0.1:9998/log", "application/json", bytes.NewReader(body)); err == nil {
            resp.Body.Close()
        }
    }()
}

The go keyword moves the request onto a goroutine, so the calling function doesn't block. Wrapping it in a closure also lets you close the response body, which a bare go client.Post(...) would leak.
Every one of these follows the same pattern. Build a JSON body, POST it, don't wait for the response. The language is different. The logging interface is identical.
Querying Across All Sources
Once your processes log to a shared endpoint, the query side is where the value compounds. Every entry from every source lands in the same 5,000-entry circular buffer.
# All errors from any source
curl "http://127.0.0.1:9998/logs?level=error&limit=20"
# Only the Python pipeline
curl "http://127.0.0.1:9998/logs?source=data-pipeline&limit=50"
# Only HTTP traffic
curl "http://127.0.0.1:9998/logs?event=http&limit=100"
# Everything from one job execution
curl "http://127.0.0.1:9998/logs?correlationId=batch_20260404"

The source filter isolates one process. The level filter shows only errors and above. The event filter matches by prefix, so event=pipeline catches pipeline.start, pipeline.fail, and pipeline.complete. The correlationId filter pulls every entry that's part of the same logical operation, regardless of which process logged it.
The query that matters most for cross-app debugging is the one that cuts across sources. Your Node API returned a 502. Was it because the Swift backend crashed? Was it because the Python pipeline fed it bad data? Query level=error&limit=20 and the answer is in chronological order, with entries from all three processes interleaved by timestamp. No terminal switching. No grep. One query.
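The same cross-source scan works from a script, not just curl. Here's a minimal Python sketch, assuming the /logs endpoint returns its entries as a JSON array of objects with source and message fields (the exact response shape may differ in your ToolPiper version):

```python
import requests

def recent_errors(limit=20):
    """Fetch recent errors from every source and format one line per entry."""
    try:
        resp = requests.get(
            "http://127.0.0.1:9998/logs",
            params={"level": "error", "limit": limit},
            timeout=2,
        )
        entries = resp.json()
    except (requests.RequestException, ValueError):
        return []  # ToolPiper isn't running, or the response wasn't JSON
    # Label each line by source so interleaved processes stay distinguishable.
    return [f"[{e.get('source', '?')}] {e.get('message', '')}" for e in entries]

for line in recent_errors():
    print(line)
```

Because entries arrive already interleaved by timestamp, the printout is the cross-process timeline with no merging step on your side.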
Tracing a Request Across Process Boundaries
The correlationId field is how you trace a single logical operation through multiple processes. Assign a shared ID at the start of a job, pass it through your systems, and include it in every log entry.
# Python pipeline starts the job
log("info", "pipeline.start", "Processing batch", {
"correlationId": "batch_20260404", "rows": 50000
})
# Node API receives the result
await log('info', 'api.ingest', 'Received pipeline output', {
correlationId: 'batch_20260404', recordCount: 49823
})
# Swift backend stores it
log("info", event: "store.write", message: "Wrote 49823 records to index")
// (with correlationId in the data payload)

Later, when something goes wrong:

curl "http://127.0.0.1:9998/logs?correlationId=batch_20260404"

You get the complete timeline. Pipeline started at 14:02:03. API received partial data at 14:02:47 (177 records missing). Backend wrote what it received. The gap is visible because the entries are structured and ordered. The pipeline's data field shows 50,000 rows in. The API's shows 49,823 received. Something dropped 177 records between the pipeline and the API. Now you know where to look.
Real-Time Streaming
For live debugging, LogPiper supports Server-Sent Events. Open a connection and matching entries appear the moment they're ingested:
# Stream all errors as they happen
curl -N "http://127.0.0.1:9998/logs/stream?level=error"
# Stream from one specific source
curl -N "http://127.0.0.1:9998/logs/stream?source=api-server"

Run this in one terminal while your processes run in others. Errors from the Python pipeline, the Node API, and the Swift backend all appear in the same stream, labeled by source, the instant they're logged. No polling. No delay.
The SSE stream accepts the same filters as the query endpoint. Watch only errors. Watch only one source. Watch only events with a specific prefix. The stream shows you what's happening right now across your entire stack.
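You can consume the stream from a script, too. A sketch in Python, assuming standard SSE framing (one JSON entry per `data:` line); if your setup requires ToolPiper's session key for the stream, you'd add it as a request header, which isn't shown here:

```python
import json
import requests

def parse_sse_line(raw: bytes):
    """Return the JSON payload of an SSE data line, or None for other lines."""
    line = raw.decode("utf-8")
    if line.startswith("data: "):
        return json.loads(line[len("data: "):])
    return None  # comments, event names, keep-alives

def stream_errors():
    """Yield error entries as they arrive on the SSE stream."""
    with requests.get(
        "http://127.0.0.1:9998/logs/stream",
        params={"level": "error"},
        stream=True,
        timeout=(2, None),  # 2 s to connect, then wait indefinitely for events
    ) as resp:
        for raw in resp.iter_lines():
            entry = parse_sse_line(raw)
            if entry is not None:
                yield entry
```

A consumer is just a for loop: `for entry in stream_errors(): print(entry.get("source"), entry.get("message"))`.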
What You Don't Need
This is the part worth emphasizing, because the absence of things is the feature.
No SDK or client library per language. Every language already has HTTP. You're not adding a dependency. You're not importing a package. You're not pinning a version.
No agent process to install and configure. There's no collector, no forwarder, no sidecar. LogPiper runs inside ToolPiper, which you already have if you're doing local AI work on a Mac.
No Docker, Elasticsearch, Kibana, Loki, or Grafana. None of that infrastructure makes sense for local development logging. LogPiper is an in-memory buffer with an HTTP interface.
No YAML configuration files. No log format negotiation. No schema registration. No output plugin configuration. No rotation policy. No alerting rules.
No account creation. The ingestion and query endpoints are unauthenticated. Any local process can write and read without credentials. The SSE stream and management endpoints (clear, export) require ToolPiper's session key, which is auto-generated on launch.
HTTP POST. JSON body. Done.
What LogPiper Is Not
LogPiper stores 5,000 entries in a circular buffer. When the buffer is full, the oldest entry gets evicted. There's no disk persistence by default. You can export the current buffer to a JSON file with POST /export, but this is on-demand, not automatic.
It's localhost only. Your processes have to be on the same machine. This is a local development debugging tool, not a distributed tracing system.
It's not built for high-volume sustained logging. A build system that generates 10,000 log lines per second will fill and rotate the buffer in under a second. For development workflows where you're logging meaningful events at API boundaries and decision points, 5,000 entries covers hours of work.
It won't replace Datadog. It won't replace Sentry. It fills a gap those tools don't target: the local, multi-language, multi-process debugging session where you want one place to look and zero infrastructure to manage.
The Pattern
The pattern that makes this work is simple enough to memorize.
Every process POSTs to the same endpoint. The source field identifies who sent the entry. Use your app name, your script name, your service name. Pick something you'll recognize when you're scanning entries at 2 AM.
Every log call is fire-and-forget. 2-second timeout. Background thread or goroutine. Empty error handler. Your app never waits on logging. If ToolPiper is down, the call fails silently. You can leave these calls in your code permanently, in development and staging, with zero risk to performance or stability.
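In Python, where requests.post blocks the caller, getting truly fire-and-forget behavior takes one extra step: a daemon thread. A sketch of that variant (the name log_async is illustrative, not part of LogPiper):

```python
import threading
import requests

def log_async(level, event, message, data=None):
    """POST from a daemon thread so the caller never waits on logging."""
    def _post():
        try:
            requests.post("http://127.0.0.1:9998/log", json={
                "source": "data-pipeline",
                "level": level,
                "event": event,
                "message": message,
                "data": data or {},
            }, timeout=2)
        except requests.RequestException:
            pass  # ToolPiper is down; drop the entry silently
    # daemon=True means an in-flight log call never keeps the process alive
    threading.Thread(target=_post, daemon=True).start()

log_async("info", "pipeline.start", "Processing batch")
```

The synchronous version is usually fine on localhost, but this variant caps the worst case: even a 2-second timeout happens off the main thread.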
Query when something breaks. level=error for the quick scan. source=my-app to isolate one process. correlationId=job_123 to trace a full operation. event=http for API traffic. The query parameters compose, so source=api-server&level=error shows only errors from the API server.
The integration is one function per language. The function is 5-10 lines. It uses your language's standard HTTP library. It has no external dependencies. Copy it into your project once and it works forever.
ToolPiper is a free download from the Mac App Store. LogPiper is included in every installation, free and Pro.
This is part of the vibe debugging series on AI development observability. For the full LogPiper technical reference, see LogPiper: A Universal Logging Bus That Ships Free Inside ToolPiper.