Orchestrators can run AI inference as well as video transcoding. AI jobs are routed by capability, price, and latency, not by stake. You need strong GPUs (e.g. 16GB+ VRAM), Docker, CUDA 12.x, and typically an activated orchestrator.
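The routing claim above can be sketched in code: a gateway filters candidates by capability, then scores them on advertised price and measured latency, ignoring stake. This is an illustrative toy model, not Livepeer's actual gateway selection algorithm; the orchestrator records, score weights, and prices are all hypothetical.

```python
# Hypothetical sketch of capability/price/latency routing (NOT the actual
# Livepeer gateway selection code). The gateway filters by capability,
# then picks the candidate with the best weighted price+latency score.
# Note that stake never enters the score.
from dataclasses import dataclass

@dataclass
class Orchestrator:
    name: str
    capabilities: set     # pipelines this node advertises
    price_per_unit: float # advertised price (units are illustrative)
    latency_ms: float     # measured round-trip latency
    stake: int            # present on-chain, but unused for AI routing

def select(orchs, pipeline, price_weight=1.0, latency_weight=0.5):
    """Pick the best orchestrator for `pipeline`; lower score wins."""
    candidates = [o for o in orchs if pipeline in o.capabilities]
    if not candidates:
        return None
    return min(candidates, key=lambda o: price_weight * o.price_per_unit
                                         + latency_weight * o.latency_ms)

orchs = [
    Orchestrator("a", {"text-to-image"}, 40.0, 120.0, stake=10**6),
    Orchestrator("b", {"text-to-image", "upscale"}, 25.0, 80.0, stake=0),
    Orchestrator("c", {"upscale"}, 10.0, 30.0, stake=10**9),
]
best = select(orchs, "text-to-image")
# "b" wins despite zero stake: it matches the capability and scores best.
```

The point of the sketch is the filter-then-score shape: a high-stake node that lacks the capability ("c") is never considered, and stake buys no advantage among those that have it.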

Enable AI

Start go-livepeer with -enableAI and set your Arbitrum RPC and other flags. Configure aiModels.json to declare pipelines and models (see Configure your orchestrator). Set per-model or per-pipeline pricing so gateways can route to you.
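A minimal aiModels.json is a JSON array with one entry per model you serve. The exact schema depends on your go-livepeer version, and the model IDs and prices below are illustrative placeholders; check the Configure your orchestrator page for the current fields.

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "stabilityai/sd-turbo",
    "price_per_unit": 4000000,
    "pixels_per_unit": 1,
    "warm": true
  },
  {
    "pipeline": "image-to-video",
    "model_id": "stabilityai/stable-video-diffusion-img2vid-xt",
    "price_per_unit": 9000000,
    "pixels_per_unit": 1,
    "warm": false
  }
]
```

Marking a model "warm" keeps it loaded in GPU memory for lower first-job latency, at the cost of VRAM held even when idle.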

BYOC and ComfyStream

BYOC and ComfyStream let you run custom AI workloads. See the Developers section (BYOC, ComfyStream) for pipeline design. Your orchestrator advertises capability and price; gateways send matching jobs.

Economics

Stake does not determine AI job assignment. Revenue comes from per-job payments, so optimise reliability and latency to maximise AI earnings.
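Because payment is per job rather than stake-weighted, expected revenue is roughly throughput times price times the fraction of offered jobs you win and complete, and latency feeds directly into that win rate. A back-of-envelope sketch, with all numbers made up for illustration:

```python
# Back-of-envelope AI revenue estimate (all numbers hypothetical).
# Revenue scales with jobs completed, not with stake, so uptime and the
# share of jobs won (driven by price and latency) dominate earnings.
def daily_revenue(jobs_offered_per_day, win_rate, success_rate, price_per_job):
    """Expected payout: jobs you win, complete successfully, and get paid for."""
    return jobs_offered_per_day * win_rate * success_rate * price_per_job

base = daily_revenue(2000, win_rate=0.10, success_rate=0.95, price_per_job=0.002)
# Cutting latency so you win 15% of offered jobs instead of 10% lifts
# revenue by 50%, with no change in stake whatsoever.
faster = daily_revenue(2000, win_rate=0.15, success_rate=0.95, price_per_job=0.002)
```

The same arithmetic shows why failed jobs are costly: a drop in success_rate cuts revenue one-for-one, so reliability improvements pay off as directly as price cuts.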


Last modified on February 18, 2026