By the end of this quickstart, you will have a ComfyStream instance running with a real-time AI effect applied to a live video feed. Once you have a working pipeline, the final section shows how to connect it to the Livepeer network.

Before You Start

ComfyStream processes live video using ComfyUI workflows on a local or cloud GPU. Choose your deployment path based on what you have available:
| Path | Best for | Requires |
| --- | --- | --- |
| RunPod | Fastest start, no local GPU | RunPod account (~$0.50/hr for an A40) |
| Docker | Local GPU or cloud server | Docker + NVIDIA GPU, Linux |
| Local install | Existing ComfyUI setup | Miniconda, NVIDIA GPU, Linux |
ComfyStream requires an NVIDIA GPU. Windows and macOS are not supported for the server component. //: # (REVIEW: Confirm Linux-only from docs.comfystream.org or Rick. The README does not explicitly state OS, but PyTorch + CUDA dependency strongly implies Linux/NVIDIA only.)
Prerequisites across all paths:
  • NVIDIA GPU with sufficient VRAM //: # (REVIEW: Verify minimum VRAM from docs.comfystream.org hardware section. Likely 12–16 GB for StreamDiffusion; 24 GB recommended for real-time performance.)
  • A modern browser (Chrome or Firefox) for the ComfyStream UI
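
If you are unsure whether a machine meets the GPU requirement, you can query VRAM with nvidia-smi before installing anything. The sketch below parses the tool's CSV output; the 12 GB threshold is a placeholder assumption, not a documented minimum, so adjust it once the VRAM requirement is confirmed.

```python
import subprocess

# Placeholder threshold (MiB); confirm the real minimum in the ComfyStream docs.
MIN_VRAM_MIB = 12 * 1024

def parse_vram_mib(nvidia_smi_output: str) -> list[int]:
    """Parse lines like '46068 MiB' produced by:
    nvidia-smi --query-gpu=memory.total --format=csv,noheader
    """
    totals = []
    for line in nvidia_smi_output.strip().splitlines():
        totals.append(int(line.strip().split()[0]))
    return totals

def check_gpus() -> None:
    """Query each installed GPU and print whether it clears the threshold."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for i, mib in enumerate(parse_vram_mib(out)):
        status = "ok" if mib >= MIN_VRAM_MIB else "below threshold"
        print(f"GPU {i}: {mib} MiB ({status})")
```

Call `check_gpus()` on the target machine; it raises `FileNotFoundError` if nvidia-smi is not installed, which itself tells you the NVIDIA driver stack is missing.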

Set Up ComfyStream

1. Deploy the RunPod template

Open the livepeer-comfystream RunPod template and select a GPU pod. For StreamDiffusion workflows, select a GPU with at least //: # (REVIEW: confirm VRAM) VRAM. An RTX A4000 or A40 is a reasonable starting point. Click Deploy.
2. Wait for the pod to start

Once the pod status shows Running, click Connect to open the pod’s exposed ports. ComfyStream exposes two ports:
  • 8188 — ComfyUI interface
  • 8889 — ComfyStream WebRTC server //: # (REVIEW: Confirm exact ports from docs.comfystream.org or docker-compose.yml in the repo. These are based on standard ComfyUI port + common ComfyStream server port.)
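
On a Docker or local deployment you can sanity-check that both services are listening before opening the UI. (On RunPod, ports are proxied through HTTPS URLs in the connect panel, so a direct TCP probe like this applies to local setups.) A minimal probe, assuming the port numbers above:

```python
import socket

# Port numbers as listed above; confirm against your deployment's mapping.
PORTS = {"ComfyUI": 8188, "ComfyStream WebRTC": 8889}

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def report(host: str) -> dict[str, bool]:
    """Probe every known service port on the given host."""
    return {name: port_open(host, port) for name, port in PORTS.items()}
```

Run `report("localhost")` after the server starts; both entries should be `True` before you open the browser UI.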
3. Open the ComfyStream UI

In the RunPod connect panel, open the HTTP service on port 8889. This loads the ComfyStream browser UI. You should see a camera input selector and a workflow panel.

Load a Workflow

ComfyStream uses ComfyUI workflow JSON files. The repository includes multiple starter workflows under the workflows/ directory.
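
Before loading a workflow into the UI, it can help to inspect which nodes it uses. The sketch below assumes the file is in ComfyUI's API ("prompt") format, a JSON object mapping node ids to objects with a "class_type" and "inputs"; UI-format exports have a different shape and would need separate handling.

```python
import json

def load_workflow(path: str) -> dict:
    """Load a ComfyUI workflow JSON file."""
    with open(path) as f:
        return json.load(f)

def list_node_types(workflow: dict) -> dict[str, int]:
    """Count node class_types in an API-format workflow
    (node id -> {"class_type": ..., "inputs": {...}})."""
    counts: dict[str, int] = {}
    for node in workflow.values():
        ct = node.get("class_type", "<unknown>")
        counts[ct] = counts.get(ct, 0) + 1
    return counts
```

For example, `list_node_types(load_workflow("workflows/some_workflow.json"))` gives a quick overview of what the pipeline will run (the filename here is illustrative).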
1. Open the workflow panel

In the ComfyStream UI, click the workflow selector and choose a workflow file. For your first run, use the StreamDiffusion SD 1.5 workflow — it is the lightest and fastest to compile. //: # (REVIEW: Confirm the recommended starter workflow filename from the repo’s workflows/ directory. Likely something like streamdiffusion_sd15.json.)
2. Select your camera input

Choose your webcam from the camera input dropdown. The UI will request camera permission.
3. Start the pipeline

Click Run. The first run requires TensorRT compilation, which takes 2–5 minutes. Subsequent runs load immediately. You will see progress logs in the terminal where the server is running.

Verify the Pipeline is Running

Once compilation completes, the ComfyStream UI shows your webcam feed with the AI effect applied in near-real-time.

Expected result: your webcam input appears transformed by the workflow — style transfer, depth-mapped effects, or other visual processing, depending on the workflow you loaded.

If you see only the raw webcam feed without transformation, check:
  • GPU VRAM is not exhausted (check nvidia-smi)
  • The workflow compiled without error (check server logs)
  • The workflow nodes reference models that have been downloaded
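
The third check, missing model files, can be scripted against the workflow JSON. This is a sketch under assumptions: it scans API-format workflows for a few common model-file input keys (illustrative, not exhaustive, since input names vary across custom nodes), and the models directory path differs per install.

```python
import os

# Hypothetical checkpoint directory; adjust to your ComfyUI install layout.
MODELS_DIR = "models/checkpoints"

def find_missing_models(workflow: dict, models_dir: str = MODELS_DIR) -> list[str]:
    """Report model files referenced by the workflow that are absent from
    models_dir. Checks a small, illustrative set of input keys."""
    keys = {"ckpt_name", "lora_name", "vae_name", "control_net_name"}
    missing = []
    for node in workflow.values():
        for key, value in node.get("inputs", {}).items():
            if key in keys and isinstance(value, str):
                if not os.path.exists(os.path.join(models_dir, value)):
                    missing.append(value)
    return missing
```

An empty return value means every referenced file in those input slots was found; anything listed needs to be downloaded before the workflow can compile.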
Want to see what a running pipeline looks like before setting up? The Building Real-Time AI Video Effects with ComfyStream post includes a 5-minute demo video.

Connect to the Livepeer Network

Running ComfyStream locally gets you a working real-time AI pipeline. To deploy that pipeline on the Livepeer network — making it accessible to other applications or earning compute fees — there are two paths:
| Path | What it does | When to use |
| --- | --- | --- |
| Daydream API | Use Livepeer Inc’s hosted ComfyStream infrastructure | You want to serve your pipeline without managing compute |
| BYOC worker | Register your ComfyStream instance as a go-livepeer orchestrator | You want to earn fees and run production workloads on the network |
For the Daydream API, request access at daydream.live. For the BYOC path, the integration layer is PyTrickle — a Python package that enables ComfyStream to register as a Livepeer AI worker. See the BYOC documentation for setup steps.

What You Can Build

ComfyStream supports the following pipeline types in production (Phase 4, January 2026):
  • StreamDiffusion — real-time style transfer and image-to-image on live video
  • StreamDiffusion V2 — second-generation diffusion pipeline, supports video-to-video and image-to-image
  • SuperResolution — real-time video upscaling
  • AudioTranscription + SRT — real-time captions embedded in video output
  • Text data-channel output — structured text output (e.g. transcription) alongside video

Next Steps

Last modified on March 16, 2026