Before You Start
ComfyStream processes live video using ComfyUI workflows on a local or cloud GPU. Choose your deployment path based on what you have available:

| Path | Best for | Requires |
|---|---|---|
| RunPod | Fastest start, no local GPU | RunPod account (~$0.50/hr for A40) |
| Docker | Local GPU or cloud server | Docker + NVIDIA GPU, Linux |
| Local install | Existing ComfyUI setup | Miniconda, NVIDIA GPU, Linux |
- NVIDIA GPU with sufficient VRAM
- A modern browser (Chrome or Firefox) for the ComfyStream UI

[//]: # (REVIEW: Verify minimum VRAM from docs.comfystream.org hardware section. Likely 12–16 GB for StreamDiffusion; 24 GB recommended for real-time performance.)
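Before starting, it can help to confirm the GPU actually meets the VRAM prerequisite. A minimal sketch: feed it the output of `nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits`. The 16 GiB threshold is a placeholder assumption, not an official figure; confirm the real minimum in the ComfyStream hardware docs.

```python
def total_vram_mib(query_output: str) -> int:
    # nvidia-smi prints one line per GPU, value in MiB; use the first GPU.
    return int(query_output.strip().splitlines()[0])

def has_enough_vram(query_output: str, required_mib: int = 16384) -> bool:
    # 16384 MiB (16 GiB) is an assumed placeholder threshold.
    return total_vram_mib(query_output) >= required_mib

# Example with output captured from an A40 (a 48 GB card reports ~46068 MiB):
print(has_enough_vram("46068\n"))  # True
```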
Set Up ComfyStream
- RunPod (fastest)
- Docker (local or cloud server)
- Local install
Deploy the RunPod template
Open the livepeer-comfystream RunPod template and select a GPU pod. For StreamDiffusion workflows, select a GPU with sufficient VRAM; an RTX A4000 or A40 is a reasonable starting point. Click Deploy.

[//]: # (REVIEW: confirm minimum VRAM for StreamDiffusion workflows)
Wait for the pod to start
Once the pod status shows Running, click Connect to open the pod’s exposed ports. ComfyStream exposes two ports:

- 8188 — ComfyUI interface
- 8889 — ComfyStream WebRTC server

[//]: # (REVIEW: Confirm exact ports from docs.comfystream.org or docker-compose.yml in the repo. These are based on standard ComfyUI port + common ComfyStream server port.)
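A sketch of the URLs RunPod exposes for these ports, assuming RunPod's standard HTTP proxy pattern (`https://<pod-id>-<port>.proxy.runpod.net`); adjust if your template maps ports differently.

```python
COMFYUI_PORT = 8188      # ComfyUI interface
COMFYSTREAM_PORT = 8889  # ComfyStream WebRTC server

def runpod_url(pod_id: str, port: int) -> str:
    # Assumed RunPod proxy URL pattern; verify against your pod's Connect tab.
    return f"https://{pod_id}-{port}.proxy.runpod.net"

pod_id = "abc123"  # hypothetical pod ID from the RunPod dashboard
print(runpod_url(pod_id, COMFYUI_PORT))      # https://abc123-8188.proxy.runpod.net
print(runpod_url(pod_id, COMFYSTREAM_PORT))  # https://abc123-8889.proxy.runpod.net
```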
Load a Workflow
ComfyStream uses ComfyUI workflow JSON files. The repository includes multiple starter workflows under the workflows/ directory.
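Before loading a workflow in the UI, you can sanity-check its JSON from the command line. A minimal sketch, assuming the ComfyUI API export format (a flat dict of node-id to node objects with a `class_type` field); UI-format exports use a `nodes` list instead and would need different handling.

```python
import json

def workflow_node_classes(path: str) -> list[str]:
    # Load an API-format workflow and list the node classes it references.
    with open(path) as f:
        workflow = json.load(f)
    return sorted({node["class_type"] for node in workflow.values()})

# e.g. workflow_node_classes("workflows/<your-workflow>.json")
# lets you spot custom nodes you still need to install.
```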
Open the workflow panel
In the ComfyStream UI, click the workflow selector and choose a workflow file. For your first run, use the StreamDiffusion SD 1.5 workflow — it is the lightest and fastest to compile.

[//]: # (REVIEW: Confirm the recommended starter workflow filename from the repo’s workflows/ directory. Likely something like streamdiffusion_sd15.json.)

Select your camera input
Choose your webcam from the camera input dropdown. The UI will request camera permission.
Verify the Pipeline is Running
Once compilation completes, the ComfyStream UI will show your webcam feed with the AI effect applied in near-real-time.

Expected result: Your webcam input appears transformed by the workflow — style transfer, depth-mapped effects, or other visual processing depending on the workflow you loaded.

If you see only the raw webcam feed without transformation, check:

- GPU VRAM is not exhausted (check nvidia-smi)
- The workflow compiled without error (check server logs)
- The workflow nodes reference models that have been downloaded
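For the VRAM check above, a small sketch: feed it one line of `nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader,nounits`. The 95% threshold is an assumption, not an official cutoff.

```python
def vram_nearly_full(query_output: str, threshold: float = 0.95) -> bool:
    # Parse "used, total" (MiB) and flag likely VRAM exhaustion.
    used, total = (int(x) for x in query_output.strip().split(","))
    return used / total >= threshold

print(vram_nearly_full("45900, 46068"))  # True: likely exhausted
print(vram_nearly_full("12000, 46068"))  # False: plenty of headroom
```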
Want to see what a running pipeline looks like before setting up? The Building Real-Time AI Video Effects with ComfyStream post includes a 5-minute demo video.
Connect to the Livepeer Network
Running ComfyStream locally gets you a working real-time AI pipeline. To deploy that pipeline on the Livepeer network — making it accessible to other applications or earning compute fees — there are two paths:

| Path | What it does | When to use |
|---|---|---|
| Daydream API | Use Livepeer Inc’s hosted ComfyStream infrastructure | You want to serve your pipeline without managing compute |
| BYOC worker | Register your ComfyStream instance as a go-livepeer orchestrator | You want to earn fees and run production workloads on the network |
What You Can Build
ComfyStream supports the following pipeline types in production (Phase 4, January 2026):

- StreamDiffusion — real-time style transfer and image-to-image on live video
- StreamDiffusion V2 — second-generation diffusion pipeline, supports video-to-video and image-to-image
- SuperResolution — real-time video upscaling
- AudioTranscription + SRT — real-time captions embedded in video output
- Text data-channel output — structured text output (e.g. transcription) alongside video
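To make the AudioTranscription + SRT pipeline concrete, here is a sketch of how transcription segments map onto SRT caption cues. The segment shape (start seconds, end seconds, text) is an illustrative assumption, not ComfyStream's actual output schema.

```python
def srt_timestamp(seconds: float) -> str:
    # SRT uses HH:MM:SS,mmm with a comma before milliseconds.
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments: list[tuple[float, float, str]]) -> str:
    # Each cue: sequence number, time range, text, blank separator line.
    cues = []
    for i, (start, end, text) in enumerate(segments, 1):
        cues.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(cues)

print(to_srt([(0.0, 1.5, "Hello"), (1.5, 3.2, "world")]))
```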