Every cloud image upscaler works the same way. You upload your photo to someone else's server. Their GPU processes it. They send the result back — maybe with a watermark, maybe behind a paywall, always after compressing it through their pipeline. Your original image, in full resolution, now lives on a server you don't control.
For personal photos, that's a privacy tradeoff most people don't think about. For client work — product photography, medical imaging, legal evidence, architectural plans — uploading to a third-party service may violate confidentiality agreements or regulatory requirements. And for screenshots and UI mockups, the round-trip compression often degrades the very detail you're trying to enhance.
Your Mac can do this locally. The Neural Engine and GPU in Apple Silicon are purpose-built for exactly this kind of matrix computation. A super-resolution model running on your own hardware produces quality comparable to cloud services, often in less time, with zero data leaving your machine.
What is AI image upscaling?
Traditional upscaling (bicubic interpolation) just guesses pixel values between existing pixels. The result is blurry — it adds resolution without adding detail. AI super-resolution models are trained on millions of image pairs to predict what high-resolution detail should exist in a low-resolution input. They reconstruct textures, sharpen edges, and recover fine detail that interpolation can't.
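To see why interpolation can only blur, not reconstruct, here is a minimal sketch of the idea in one dimension (linear rather than bicubic, for brevity; bicubic fits a cubic curve through more neighbors but has the same limitation):

```python
def linear_upscale_row(row, factor):
    """Upscale a 1-D row of pixel values by linear interpolation.

    Each new pixel is a weighted average of its two nearest source
    pixels; no information beyond the original samples is created.
    """
    n = len(row)
    out = []
    for i in range(n * factor):
        # Map each output index back to a fractional source position.
        pos = i / factor
        left = min(int(pos), n - 1)
        right = min(left + 1, n - 1)
        t = pos - left
        out.append(row[left] * (1 - t) + row[right] * t)
    return out

# A hard edge (0 -> 255) becomes a smooth ramp: more pixels, no new detail.
print(linear_upscale_row([0, 255], 2))  # [0.0, 127.5, 255.0, 255.0]
```

A trained super-resolution model, by contrast, has seen what real edges and textures look like at high resolution, so it can place a sharp transition where interpolation can only average.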
The models used for this are small — under 20 MB — but the computation is intensive. Each pixel in the output is predicted from a neighborhood of input pixels through multiple neural network layers. A 4x upscale of a 1000x1000 image produces a 4000x4000 output — 16 million pixels, each individually predicted.
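The output-size arithmetic is worth making explicit, since it grows quadratically with the scale factor:

```python
def upscaled_size(width, height, factor):
    """Output dimensions and total pixel count for an upscale."""
    out_w, out_h = width * factor, height * factor
    return out_w, out_h, out_w * out_h

w, h, pixels = upscaled_size(1000, 1000, 4)
print(f"{w}x{h} = {pixels:,} pixels")  # 4000x4000 = 16,000,000 pixels
```

A 4x upscale means 16x the pixels, which is why even a "small" model keeps the Neural Engine busy for a few seconds.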
Apple Silicon makes this practical on a laptop. The Neural Engine handles the matrix multiplications, the GPU handles the pixel format conversions, and unified memory means the image data doesn't need to be copied between processors. A typical upscale completes in 1–5 seconds depending on input size.
Why local matters for image upscaling
Your images stay on your disk. Client photos, medical scans, legal documents, proprietary designs — none of it touches a network. There's no upload, no server-side processing, no third-party data retention policy to worry about.
No compression artifacts. Cloud services compress uploads and downloads to save bandwidth. When you're upscaling to recover detail, re-compression defeats the purpose. Local processing works with your original file and produces an uncompressed output.
No per-image cost. Cloud upscaling services charge per image — $0.05 to $0.50 each, with monthly limits on free tiers. Local upscaling costs nothing per image. Process a thousand photos in a batch and the only cost is electricity.
No watermarks, no account, no waitlist. Cloud services gate quality behind subscriptions and plaster watermarks on free-tier output. Locally, you get the full-quality result immediately.
Works offline. Upscale photos on a plane, at a client site with no Wi-Fi, or anywhere else. The models are bundled — no download required.
What you need
You don't need: A terminal. Python. Docker. An API key. A subscription to Topaz or Gigapixel. A dedicated GPU.
You do need: A Mac with Apple Silicon (M1 or later) and at least 8GB of RAM. That's it.
The models
ToolPiper bundles two super-resolution models, both CoreML-optimized for Apple Silicon:
PurePhoto SPAN 4x — a 16-layer residual network with attention, processing 256×256 pixel tiles. This is the default for image upscaling. It quadruples resolution: a 1000×1000 photo becomes 4000×4000. The model handles photos, illustrations, screenshots, and scanned documents well. It's particularly strong on photographic content where texture recovery matters — skin detail, fabric, foliage.
PiperSR 2x — our own 453K-parameter model, built for speed. It doubles resolution using 128×128 tiles and 6 residual blocks with 64 channels. PiperSR is faster than SPAN 4x and produces clean results on screenshots, UI elements, and text-heavy images where a 2x upscale is sufficient. It's also the model behind the 44 FPS video upscale pipeline.
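As a sanity check on the 453K figure, the bulk of the parameters in a network like this live in the residual-block convolutions. The layout below (3×3 kernels, two convolutions per block, an RGB head convolution, and a tail convolution feeding a 2x pixel shuffle) is an assumption for illustration; only the block count and channel width come from the description above:

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of a k x k convolution layer."""
    return k * k * c_in * c_out + c_out

# Assumed layout: RGB head conv, 6 residual blocks (two 3x3 convs of
# 64 channels each), tail conv producing a 2x pixel-shuffle output.
head = conv_params(3, 64)
body = 6 * 2 * conv_params(64, 64)
tail = conv_params(64, 3 * 2 * 2)  # 12 channels -> 2x pixel shuffle
print(head + body + tail)  # 451852, in the ballpark of the stated 453K
```

The residual blocks alone account for over 440K parameters, so the stated total is plausible for this class of architecture.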
Both models are bundled inside the app — no download step, no waiting. They're always ready.
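Because each model processes fixed-size tiles (256×256 for SPAN 4x, 128×128 for PiperSR), a large image is split into a grid of tiles and reassembled afterward. Ignoring any tile overlap the real pipeline may use (an assumption here), the tile count works out as:

```python
import math

def tile_count(width, height, tile):
    """Number of fixed-size tiles needed to cover an image, no overlap."""
    return math.ceil(width / tile) * math.ceil(height / tile)

# A 1000x1000 photo:
print(tile_count(1000, 1000, 256))  # 16 tiles for SPAN 4x
print(tile_count(1000, 1000, 128))  # 64 tiles for PiperSR 2x
```

PiperSR runs more tiles, but each tile is a quarter of the pixels and the network is far smaller, which is where its speed advantage comes from.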
The ModelPiper workflow
Load the Local Image Upscale template. The pipeline has three nodes: an image input, the upscale processor, and the output display.
Drop an image onto the input node — drag from Finder, paste from clipboard, or use the file picker. The upscale runs immediately. The result appears in the output node at 2x or 4x resolution, ready to save.
The default model is PurePhoto SPAN 4x. Switch to PiperSR 2x from the model dropdown if you want faster processing or only need a 2x upscale. Both models accept common image formats (PNG, JPEG, and WebP) up to 8192×8192 pixels.
For batch processing, the upscale is also available via the /v1/images/upscale REST endpoint and the image_upscale MCP tool — send a base64-encoded image and get the upscaled PNG back.
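The endpoint path comes from the text above, but the request schema shown here (a JSON body with a base64 `image` field, served on a local port) is an assumption; check the actual API reference for the real field names. A minimal sketch using only the standard library:

```python
import base64
import json
import urllib.request

def build_upscale_request(image_path,
                          url="http://localhost:8080/v1/images/upscale"):
    """Build a JSON request for the upscale endpoint.

    The `image` field name and the local port are assumptions,
    not the documented schema.
    """
    with open(image_path, "rb") as f:
        payload = {"image": base64.b64encode(f.read()).decode("ascii")}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Sending it (requires the local server to be running):
# with urllib.request.urlopen(build_upscale_request("photo.png")) as resp:
#     open("photo_4x.png", "wb").write(resp.read())
```

For a batch, loop this over a directory and write each returned PNG next to its source file.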
When cloud upscaling is still better
Dedicated cloud services like Topaz Gigapixel have larger models with more parameters, trained on specific domains (faces, text, anime). For professional photo restoration or forensic-level enhancement, those specialized models can outperform general-purpose super-resolution on their target domain. The tradeoff is price ($99+ one-time or subscription), upload requirements, and processing time.
For everyday upscaling — product photos, screenshots, scanned documents, social media images, UI assets — the bundled models produce excellent results with zero friction.
Try it
Download ModelPiper, install ToolPiper, and load the Local Image Upscale template. Drop an image, see the result. Both models are bundled — there's nothing to download or configure.
This is part of a series on local-first AI workflows on macOS. See also: Video Upscale — the same technology applied to video at 44 FPS.