---
title: "ModelPiper Web App Docs — ModelPiper"
description: "Documentation for the ModelPiper visual pipeline builder web application. Block types, pipeline concepts, and workflow guide."
canonical: "https://modelpiper.com/docs/modelpiper/"
---

# ModelPiper Web App Docs — ModelPiper

A browser-based UI for building AI pipelines, chatting with models, processing images, and orchestrating workflows. Works with cloud APIs, local models, or both.

## Getting Started

ModelPiper connects to AI providers to run inference. You can get started in minutes with any of these options:

### ToolPiper (easiest)

macOS companion app that bundles local inference (llama.cpp, speech-to-text, text-to-speech, image upscaling), proxies cloud APIs securely, and manages model downloads.

1. Install ToolPiper from **modelpiper.com/download**
2. Launch it — ModelPiper automatically detects the connection
3. Use the push-button templates and download local models

### OpenRouter (cloud)

Access 300+ models (GPT-4o, Claude, Gemini, Llama, Mistral, and more) through a single API key. Works directly in the browser — no install required.

1. Create a free account at **openrouter.ai** and generate an API key
2. In ModelPiper, open **My Connections** (sidebar) and click **Add Provider**
3. Select **OpenRouter**, paste your API key, and pick a model
4. Save the configuration — you're ready to use Pipelines and Chat
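Under the hood, OpenRouter exposes an OpenAI-compatible API, which is why it works straight from the browser. A minimal sketch of a request a pipeline block could make (the model id and key are placeholders, and this is illustrative rather than ModelPiper's actual code):

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the JSON body for an OpenAI-style chat completion request.
function buildChatRequest(model: string, messages: ChatMessage[]): string {
  return JSON.stringify({ model, messages });
}

// Send one chat turn to OpenRouter and return the assistant's reply.
async function chat(apiKey: string, model: string, messages: ChatMessage[]): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // the key you generated at openrouter.ai
      "Content-Type": "application/json",
    },
    body: buildChatRequest(model, messages),
  });
  if (!res.ok) throw new Error(`OpenRouter error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content as string;
}
```

The same request shape works for any of the 300+ models — only the `model` string changes.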

### Ollama (local)

Run open-source models entirely on your machine. Free, private, no API key needed.

1. Install Ollama from **ollama.com** and pull a model (e.g. `ollama pull llama3.2`)
2. Start Ollama with CORS enabled: `OLLAMA_ORIGINS=* ollama serve`
3. In ModelPiper, add an **Ollama** provider — it auto-detects your models
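The auto-detection in step 3 is possible because Ollama serves a local HTTP API on port 11434; its `GET /api/tags` endpoint lists installed models. A sketch of what such a probe could look like (illustrative, not ModelPiper's actual detection code):

```typescript
// Shape of the relevant part of Ollama's /api/tags response.
interface OllamaTags {
  models: { name: string }[];
}

// Extract the model names from a /api/tags payload.
function modelNames(tags: OllamaTags): string[] {
  return tags.models.map((m) => m.name);
}

// Probe a local Ollama instance and return its installed models.
async function detectOllamaModels(host = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${host}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`);
  return modelNames(await res.json());
}
```

Note that the browser can only make this request because `OLLAMA_ORIGINS` allows cross-origin calls (step 2).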

## Connecting AI Providers

Open **My Connections** from the sidebar to manage providers. Each provider becomes a configuration you can use across Pipelines, Chat, and other features.

### Cloud Providers

**OpenRouter** works directly from the browser. Other cloud providers (OpenAI, Anthropic, Google Gemini) require the ToolPiper companion app to proxy requests securely.

### Local Providers

**Ollama** and **LM Studio** run on your machine and connect over localhost. Models stay on your hardware — nothing leaves your network.

### Custom Endpoints

Any OpenAI-compatible API can be added as a custom provider. Set the host, port, path, and auth pattern to match your server.
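How those settings could combine into a request, as a sketch (the field names mirror the settings above; `http` is assumed for a local server, and bearer-token auth stands in for whatever pattern your server uses):

```typescript
// Hypothetical shape of a custom provider configuration.
interface CustomProvider {
  host: string;    // e.g. "my-server.local"
  port: number;    // e.g. 8080
  path: string;    // e.g. "/v1/chat/completions"
  apiKey?: string; // optional bearer token
}

// Assemble the full endpoint URL, normalizing a missing leading slash.
function endpointUrl(p: CustomProvider): string {
  const path = p.path.startsWith("/") ? p.path : `/${p.path}`;
  return `http://${p.host}:${p.port}${path}`;
}

// Build request headers, attaching auth only when a key is configured.
function authHeaders(p: CustomProvider): Record<string, string> {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (p.apiKey) headers.Authorization = `Bearer ${p.apiKey}`;
  return headers;
}
```

Servers such as llama.cpp's `llama-server` or vLLM expose this OpenAI-compatible shape, so they slot in as custom providers.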

## Features

### Pipeline Builder

Visual block-based editor. Chain LLM, audio, image, and RAG blocks together with automatic data flow between them.

### Chat

Multi-model chat interface. Streaming responses, system prompts, conversation history, and provider switching mid-conversation.

### Image

CoreML-powered image upscaling with side-by-side comparison. Requires ToolPiper for the upscaling backend.

### Workflows

Save pipeline configurations as reusable templates. Import and export your workspace, including all settings and connections.

## API Keys & Privacy

### Where are my keys stored?

API keys are encrypted with AES-GCM and stored in your browser's localStorage. When ToolPiper is connected, keys are additionally stored in the macOS Keychain. Keys never leave your device.

### What data is sent externally?

Only the prompts and parameters you send to your chosen AI provider. ModelPiper has no backend server, no telemetry, and no analytics. Your data flows directly between your browser and the provider.

## Companion Apps

ModelPiper works standalone, but companion apps unlock additional capabilities:

### [ToolPiper](/docs/toolpiper)

Local AI gateway — inference engines, cloud proxy, model management, speech & audio

### [VisionPiper](/docs/visionpiper)

Screen capture, recording, GIF conversion, and live streaming

### [AudioPiper](/docs/audiopiper)

Multi-source audio mixer — mic, system audio, and per-app capture

### [MediaPiper](/docs/mediapiper)

Browser extension — image & video hover preview, intelligent discovery, on-device AI upscaling
