In a nutshell
- Pipelines are one or more inference tasks (e.g. Whisper, style transfer, detection) run in sequence on video frames.
- Gateways route jobs to compatible Orchestrators and workers; the protocol handles payment and coordination.
- BYOC (Bring Your Own Compute) and ComfyStream are two ways to run or extend pipelines with your own models and nodes.
Use cases
- Speech-to-text (Whisper)
- Style transfer or filters (Stable Diffusion)
- Object tracking and detection (YOLO)
- Video segmentation (segment-anything)
- Face redaction or blurring
- BYOC (Bring Your Own Compute)
What is a pipeline?
An AI pipeline consists of one or more tasks executed in sequence on live video frames. Each task may:
- Modify the video (e.g. add overlays)
- Generate metadata (e.g. transcript, bounding boxes)
- Relay results to another node
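As a sketch, a pipeline can be modeled as an ordered list of tasks applied to each frame, where each task may transform the frame, emit metadata, or both. The names here are illustrative, not the Livepeer API:

```python
from typing import Callable

# A task takes a frame (modeled as bytes) and returns (possibly modified frame, metadata).
Task = Callable[[bytes], tuple[bytes, dict]]

def run_pipeline(frame: bytes, tasks: list[Task]) -> tuple[bytes, list[dict]]:
    """Run each task in sequence, threading the frame through and collecting metadata."""
    metadata = []
    for task in tasks:
        frame, meta = task(frame)
        metadata.append(meta)
    return frame, metadata

# Illustrative tasks: a "transcribe" step that only emits metadata,
# and a "blur" step that modifies the frame (byte reversal as a stand-in transform).
def transcribe(frame: bytes) -> tuple[bytes, dict]:
    return frame, {"task": "whisper-transcribe", "text": "..."}

def blur(frame: bytes) -> tuple[bytes, dict]:
    return frame[::-1], {"task": "face-blur"}

out, meta = run_pipeline(b"frame-data", [transcribe, blur])
```

The key property is that tasks compose: metadata-only tasks (transcription, detection) pass the frame through unchanged, while filters (style transfer, blurring) replace it.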
Architecture
Gateway and workers
- Orchestrators queue inference jobs and run (or delegate to) workers.
- Workers subscribe to task types (e.g. whisper-transcribe) and execute them.
- Gateways route jobs from clients to compatible nodes. Routing happens off-chain; the Livepeer protocol (on Arbitrum) handles payments and rewards.
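The routing step can be sketched as a gateway matching a job's task type against worker subscriptions. This is a minimal in-memory model with hypothetical names; the real Gateway discovers Orchestrators over the network and settles payment on-chain:

```python
class Gateway:
    """Toy job router: workers subscribe to task types, jobs go to a compatible worker."""

    def __init__(self):
        self.subscriptions = {}  # task type -> list of worker callables

    def register(self, task_type, worker):
        self.subscriptions.setdefault(task_type, []).append(worker)

    def route(self, job):
        workers = self.subscriptions.get(job["type"])
        if not workers:
            raise LookupError(f"no worker subscribed to {job['type']}")
        # Simplest possible policy: first compatible worker. A real gateway
        # would weigh price, latency, and capacity.
        return workers[0](job)

gw = Gateway()
gw.register("whisper-transcribe", lambda job: {"job": job["id"], "transcript": "..."})
result = gw.route({"type": "whisper-transcribe", "id": 1})
```

A worker that never subscribes to a task type is simply never offered jobs of that type, which is why workers advertise specific capabilities such as `whisper-transcribe`.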
Worker types
| Type | Description | Example models |
|---|---|---|
Pipeline definition format
Jobs are expressed as JSON task objects and can be submitted in several ways:
- JSON-formatted tasks via the Gateway
- Frame-by-frame gRPC (low latency)
- Result upload via webhook
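A hedged sketch of what a JSON task object might look like: the field names (`id`, `pipeline`, `params`, `result_webhook`) are illustrative assumptions, not the Livepeer schema:

```python
import json

# Hypothetical job object describing a two-task pipeline with webhook result upload.
job = {
    "id": "job-123",
    "pipeline": [
        {"task": "whisper-transcribe", "params": {"language": "en"}},
        {"task": "face-blur", "params": {"strength": 0.8}},
    ],
    "result_webhook": "https://example.com/results",  # where the worker posts output
}

payload = json.dumps(job)  # serialized form sent to the Gateway
```

For low-latency use, the same task descriptions would be negotiated once and frames streamed frame-by-frame over gRPC rather than re-submitting JSON per frame.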
Bring your own compute (BYOC)
You can use your own GPU nodes to serve inference tasks:
- Clone ComfyStream or implement the processing API.
- Add plugins for Whisper, ControlNet, or other models.
- Register your node with the gateway (and optionally on-chain).
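A minimal sketch of what "implement the processing API" could look like, assuming a plain HTTP/JSON interface: accept a job, run the model, return a result. The endpoint shape is an assumption for illustration; see the ComfyStream repo for the actual worker interface:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InferenceHandler(BaseHTTPRequestHandler):
    """Toy processing endpoint: accepts a JSON job via POST, returns a JSON result."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        job = json.loads(self.rfile.read(length))
        # A real worker would invoke the model here (e.g. Whisper via a plugin).
        result = {"job": job.get("id"), "status": "done"}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging in this sketch

# To serve: HTTPServer(("0.0.0.0", 8000), InferenceHandler).serve_forever()
```

Once the endpoint is up, registering it with a gateway (and optionally on-chain) is what makes it discoverable for job routing.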
See also
- BYOC — Run your own AI workers and register with the network
- ComfyStream — ComfyUI-based pipelines and Gateway integration
- Livepeer AI (overview) — Product overview and use cases
- Network technical architecture — Gateway, Orchestrator, and protocol
Resources
- ComfyStream GitHub
- Livepeer Studio AI docs
- Forum: example pipelines
- Explorer — Network and node stats