Key Decisions for Orchestrator Setup
  1. Setup Type - On-chain or off-chain
  2. Setup Path - What software is installed
  3. Operational Mode - Whether you control and handle all operational requirements or delegate them
  4. Workload Mode - What compute job workloads the node processes
This page is a guide to finding the Orchestrator setup path that matches your operational aims. It covers the deployment options available to Orchestrators, grouped by category:
  1. Software Setup Options:
  • go-livepeer Livepeer Software
  • Siphon Ecosystem Software
  2. Workload Setup Options:
  • AI
  • Video
  • Video and Transcoding
  • Dual

Deployment Types

A pool node is not a pool operator. A pool node joins someone else’s pool and contributes GPU compute. A pool operator runs the orchestrator that accepts external workers. These are different deployment types with different requirements.

Deployment Considerations

The standard path: a single go-livepeer process on one machine handles protocol operations, job routing, and GPU work. The operator controls everything: pricing, stake, workloads, reward calling, and uptime.
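As a rough sketch, a single-process setup like this is typically started with the `livepeer` binary running in combined orchestrator/transcoder mode. The flag values below (network, RPC URL, service address, price) are illustrative assumptions, not recommendations; consult the setup guide for the flags your version supports.

```shell
# Single-node Orchestrator sketch: one go-livepeer process does protocol
# handling (-orchestrator) and GPU work (-transcoder) on the same machine.
# All values below are placeholders.
livepeer \
  -orchestrator \
  -transcoder \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb1.arbitrum.io/rpc \
  -serviceAddr 203.0.113.10:8935 \
  -pricePerUnit 70 \
  -nvidia "all"
```

Because one process owns pricing, stake, and GPU scheduling, there is no coordination layer to configure; the trade-off is that this machine is a single point of failure for all of them.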

Setup Guide

Install, configure, connect, and verify.

Decision Tree

Workload Mode

Deployment type and workload mode are independent decisions: any deployment type above can run any workload mode below. Dual mode is the most common production configuration. NVENC/NVDEC (video) use dedicated silicon that does not compete with the CUDA cores used for AI, but both workloads share VRAM. A 24 GB GPU supports video transcoding alongside one warm AI model. For full dual mode setup instructions, see . For a detailed breakdown of all AI pipeline types, VRAM requirements, and demand data, see .
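The VRAM-sharing point above can be made concrete with a dual-mode sketch. This assumes the AI worker flags (`-aiWorker`, `-aiModels`) and the `aiModels.json` schema from recent go-livepeer AI builds; flag names, the schema, and pipeline IDs may differ in your version, and the model ID below is a hypothetical placeholder.

```shell
# Declare one warm AI model; "warm": true keeps it loaded in VRAM so the
# remaining memory on a 24 GB GPU is left for NVENC/NVDEC transcode sessions.
# Schema and model ID are assumptions - verify against your go-livepeer version.
cat > /etc/livepeer/aiModels.json <<'EOF'
[
  { "pipeline": "text-to-image", "model_id": "<your-model-id>", "warm": true }
]
EOF

# Dual mode: video transcoding and AI jobs on the same GPU (device 0).
livepeer \
  -orchestrator \
  -transcoder \
  -aiWorker \
  -aiModels /etc/livepeer/aiModels.json \
  -nvidia "0"
```

Keeping only one model warm is the key design choice here: each additional warm model permanently reserves VRAM that would otherwise be available to transcode sessions.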

Next Steps

Last modified on March 16, 2026