- Standard API Pipelines - call a hosted endpoint, get a result. No infrastructure needed.
- ComfyStream - run ComfyUI-based workflows on live video frames in real time.
- BYOC (Bring Your Own Compute) - bring your own model container; Livepeer routes jobs to it.
Choosing Your Integration Pattern
Standard API Pipelines
Standard pipelines are available via any Livepeer gateway that supports AI inference. Send a request with your model ID and parameters; get back a result.

Available Pipelines
Quick Example (text-to-image)
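A minimal sketch of a text-to-image request, using only the Python standard library. The base endpoint and auth header come from the gateway table below; the exact request path, the model ID, and the payload field names are illustrative assumptions, not a definitive schema.

```python
# Build (but do not automatically send) a text-to-image request to a
# Livepeer gateway. Path, model_id, and payload fields are assumptions.
import json
import urllib.request

GATEWAY_URL = "https://livepeer.studio/api/beta/generate/text-to-image"  # assumed path
API_KEY = "<LIVEPEER_STUDIO_API_KEY>"

def build_request(prompt: str) -> urllib.request.Request:
    """Lightning-class models want few steps and low guidance (see below)."""
    payload = {
        "model_id": "ByteDance/SDXL-Lightning",  # example model id
        "prompt": prompt,
        "num_inference_steps": 6,   # 4-8 for Lightning-suffix models
        "guidance_scale": 1.5,      # 1.0-2.0 for Lightning-suffix models
    }
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("a watercolor fox in a snowy forest")
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

For a standard (non-Lightning) SDXL model, raise `num_inference_steps` to 20-50 and `guidance_scale` to 7.0-9.0, per the model-selection note below.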
Model selection matters. Lightning-suffix models (e.g. RealVisXL_V4.0_Lightning) are optimized for speed - use 4-8 inference steps and a guidance scale of 1.0-2.0. Standard SDXL models need 20-50 steps and guidance 7.0-9.0. Check available models and warm status before selecting.

Available Gateways for AI
| Gateway | Endpoint | Auth | Best For |
|---|---|---|---|
| Livepeer Studio | https://livepeer.studio/api/beta/generate | Authorization: Bearer <LIVEPEER_STUDIO_API_KEY> | Production apps |
| Cloud SPE | tools.livepeer.cloud | Provider-defined | Development and experimentation |
| Self-hosted | Your gateway URL | Authorization: Bearer <LIVEPEER_GATEWAY_API_KEY> | Custom routing, private models |
Livepeer Studio's direct endpoint is https://livepeer.studio/api/beta/generate; for Cloud SPE-managed access, check tools.livepeer.cloud for the current direct API endpoint and auth requirements.
ComfyStream
ComfyStream integrates ComfyUI with the Livepeer gateway protocol to run AI pipelines on live video frames in real time. It’s the foundation of real-time AI video products like Daydream.

How it works:
- Video stream is ingested and split into frames
- Each frame is sent to a ComfyStream worker node
- The worker runs the ComfyUI workflow graph on the frame (style transfer, detection, etc.)
- The processed frame is returned and reassembled into an output stream
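The per-frame loop above can be sketched as follows; `run_workflow` stands in for executing the ComfyUI workflow graph, and all names here are illustrative, not the ComfyStream API.

```python
# Sketch of the ingest -> per-frame workflow -> reassemble loop.
# Frame, process_stream, and run_workflow are illustrative stand-ins.
from typing import Callable, Iterable, Iterator

Frame = bytes  # stand-in for a decoded video frame

def process_stream(frames: Iterable[Frame],
                   run_workflow: Callable[[Frame], Frame]) -> Iterator[Frame]:
    """Run the workflow graph on each frame and yield processed frames
    for reassembly into the output stream."""
    for frame in frames:
        yield run_workflow(frame)  # e.g. style transfer, depth estimation

# Example with an identity "workflow" that passes frames through unchanged:
out = list(process_stream([b"frame1", b"frame2"], lambda f: f))
```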
Common use cases:
- Real-time style transfer on live streams
- Per-frame AI effects (depth estimation, face animation)
- Interactive AI art with webcam input
ComfyStream Guide
Full ComfyStream architecture, node types, and integration guide.
BYOC (Bring Your Own Compute)
BYOC lets you bring a custom model container into the Livepeer AI network. Your container receives jobs routed by gateways, executes inference, and returns results - while Livepeer handles routing, payment, and coordination.

BYOC is the right path when:
- Your model is fine-tuned or proprietary (not available in the standard pipeline set)
- You need a specific inference runtime (vLLM, TensorRT, custom Python)
- You want Livepeer to provide the routing and payment layer for your compute
Your container must:
- Expose an HTTP endpoint implementing the Livepeer AI worker API
- Accept job payloads matching the gateway’s protocol format
- Return results in the expected schema
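The container contract above can be sketched with a standard-library HTTP server. The route, payload fields, and response schema here are assumptions for illustration only; consult the BYOC Setup Guide for the actual worker API.

```python
# Sketch of a BYOC container's job endpoint. Payload fields, response
# schema, and port are illustrative assumptions, not the real worker API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_job(payload: dict) -> dict:
    """Run inference on a job payload and return a result dict."""
    prompt = payload.get("prompt", "")
    return {"result": f"echo: {prompt}"}  # replace with real model inference

class JobHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_job(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Port and bind address are illustrative.
    HTTPServer(("0.0.0.0", 8935), JobHandler).serve_forever()
```

Keeping inference in a plain function like `handle_job` makes it easy to test the model logic separately from the HTTP plumbing.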
BYOC Setup Guide
How to build, register, and deploy a BYOC container on the Livepeer network.