Page is under construction.
Check the GitHub issues for ways to contribute, or provide your feedback in this quick form.
Key Capabilities:
- Real-time video processing at 15-30 FPS
- WebRTC-based streaming for low latency
- ComfyUI workflow compatibility for flexible AI pipelines
- TensorRT acceleration for 10x+ performance improvements
- Multiple deployment modes (Docker, cloud, local development)
- Bring Your Own Compute (BYOC) orchestration support
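The 15-30 FPS target above implies a hard per-frame time budget that every stage of the pipeline must fit inside. A quick sketch of that arithmetic (the helper name is illustrative, not part of ComfyStream's API):

```python
def frame_budget_ms(fps: float) -> float:
    """Maximum time (in milliseconds) the whole pipeline may spend
    on one frame while still sustaining the given frame rate."""
    return 1000.0 / fps

# At 15 FPS all stages combined must finish within ~66.7 ms;
# at 30 FPS the budget tightens to ~33.3 ms per frame.
for fps in (15, 30):
    print(f"{fps} FPS -> {frame_budget_ms(fps):.1f} ms/frame")
```

This is why per-stage latency (capture, transport, inference, encode) matters more than raw throughput for a live stream: any single stage that exceeds the budget drops the effective frame rate.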
Primary Use Cases:
- Live AI video effects (style transfer, depth estimation, face animation)
- Real-time image-to-image translation on video streams
- Interactive AI art generation with webcam input
- Distributed GPU compute for video processing
Architecture
ComfyStream is organized into six primary architectural layers, each with distinct responsibilities.
Layer Responsibilities:
| Layer | Components | Primary Function |
|---|---|---|
| Client | Browser, Webcam | Capture media input and display output |
| UI | StreamCanvas, Room, Settings | Video standardization, WebRTC setup, configuration |
| Transport | RTCPeerConnection, MediaTracks | Real-time media streaming with WebRTC |
| Server | app.py, byoc.py | WebRTC signaling, media track handling, orchestration |
| Processing | Pipeline, ComfyStreamClient | Frame-to-tensor conversion, workflow execution coordination |
| Backend | ComfyUI, Custom Nodes | Workflow graph execution, AI model inference |
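As an illustration of the Processing layer's frame-to-tensor conversion step, here is a minimal sketch. It assumes 8-bit RGB frames scaled to floats in [0, 1]; the function name and pure-list data layout are illustrative only, not ComfyStream's actual `Pipeline` API (which would operate on real video frames and GPU tensors):

```python
def frame_to_tensor(frame):
    """Convert an 8-bit RGB frame (nested H x W x 3 lists of ints in 0-255)
    into a float 'tensor' of the same shape with values scaled to [0, 1],
    the range AI image models typically expect."""
    return [[[channel / 255.0 for channel in pixel] for pixel in row]
            for row in frame]

# A 1x2 "frame": one black pixel and one white pixel.
frame = [[[0, 0, 0], [255, 255, 255]]]
print(frame_to_tensor(frame))  # black -> 0.0, white -> 1.0
```

In the real pipeline this conversion (and its inverse after inference) sits between the Transport layer's decoded media frames and the Backend's workflow execution.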