Orchestrator Layer Context
The Livepeer network has three functional layers. Orchestrators sit at the compute layer, receiving jobs from Gateways and executing them on GPU hardware. Orchestrators are the only participants that interact with all three layers: accepting jobs from the Gateway layer, executing them at the compute layer, and transacting with the protocol layer for staking, rewards, and payment ticket redemption.
Orchestrator Interactions
An Orchestrator interacts with three categories of actor.
Gateways
Gateways are the Orchestrator's job sources. Every job an Orchestrator executes arrives from a Gateway.
- The Gateway establishes a session with the Orchestrator, agreeing on price and verifying capability
- Jobs arrive as segments (video) or HTTP requests (AI inference), each with a probabilistic micropayment ticket attached
- The Orchestrator processes the job, returns the result, and accumulates payment tickets
- Slow or failing responses push the Orchestrator down future Gateway rankings
GPU Workers
The Orchestrator process coordinates two types of GPU worker:
- Transcoder - handles video transcoding. Receives raw segments, applies output profiles (resolution, bitrate, codec), and returns encoded segments. Runs either in-process or as a separate transcoder process in an O-T split configuration.
- AI Runner - handles AI inference. Receives inference requests, routes them to the appropriate loaded model (or loads the model on demand), and returns results. Runs as a Docker container managed by the Orchestrator process.
Arbitrum Protocol
Orchestrators interact with four Arbitrum smart contracts.
The AIServiceRegistry contract (0x04C0b249740175999E5BF5c9ac1dA92431EF34C5 on Arbitrum Mainnet) is separate from the primary ServiceRegistry and is used specifically for AI subnet registration. Enable it with the -aiServiceRegistry flag. The contract is currently detached from the main protocol controller - confirm its current integration status against your setup guide.
Dual Pipeline Architecture
The Orchestrator node (LivepeerNode in go-livepeer) runs two independent processing pipelines - one
for video transcoding and one for AI inference. Both pipelines are active simultaneously in a
dual-workload configuration.
Video vs AI Pipelines
Request Flow
This is what happens when a Gateway sends a job to an Orchestrator, from receipt through result delivery and payment accumulation.
Lifecycle Steps
- Job arrives - The Gateway sends a video segment or AI request with an attached probabilistic micropayment ticket.
- Ticket verification - The Orchestrator verifies the ticket is valid (correct signer, sufficient face value, within expected value range).
- Pipeline routing - The Orchestrator routes the job to the video pipeline (transcoder) or the AI pipeline (AI runner) based on the job type.
- Execution - The worker processes the job. For video: transcodes the segment to all requested output profiles. For AI: runs inference against the loaded model.
- Result return - The Orchestrator returns the result to the Gateway over HTTP.
- Ticket accumulation - The ticket is stored. At the protocol level, each ticket has a probability of being a “winning” ticket - the Orchestrator redeems only winning tickets on-chain.
- On-chain redemption - Winning tickets are submitted to the TicketBroker contract on Arbitrum, releasing ETH to the Orchestrator.
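As a back-of-the-envelope illustration of steps 6-7, a ticket's expected value is its face value multiplied by its winning probability. The numbers below are invented for the arithmetic only; they are not protocol parameters.

```shell
# Hypothetical ticket parameters (illustrative only, not protocol values).
FACE_VALUE_WEI=500000000000000   # face value of a winning ticket, in wei
WIN_PROB=0.001                   # probability that any given ticket wins

# Expected value per ticket = face value * winning probability
EV_WEI=$(awk -v f="$FACE_VALUE_WEI" -v p="$WIN_PROB" 'BEGIN { printf "%.0f", f * p }')
echo "expected value per ticket: ${EV_WEI} wei"
```

Because only winning tickets are redeemed on-chain, the Orchestrator's long-run earnings track this expected value while gas costs are paid only on the rare redemption transactions.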
Setup Configurations
The go-livepeer binary supports three physical layouts that map to different hardware scales and operating requirements.
- Combined (solo)
- O-T Split
- Pool Operator
The Orchestrator and Transcoder run as a single process on one machine. This is the default
configuration for most solo operators.
- Simple to operate and monitor
- Transcoder worker runs in-process (same machine)
- AI Runner containers managed by the same process
- Suitable for single-GPU setups
Combined single-node startup
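A minimal sketch of a combined startup command, assuming an Arbitrum One deployment. The flag values (RPC URL, service address, model file path) are placeholders; confirm the full flag set against the release you run with livepeer -help.

```shell
# Combined (solo) layout: Orchestrator, in-process Transcoder, and AI worker
# in one process. Values are illustrative placeholders, not recommendations.
livepeer \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb1.arbitrum.io/rpc \
  -orchestrator \
  -transcoder \
  -aiWorker \
  -aiModels ~/.lpData/aiModels.json \
  -nvidia all \
  -serviceAddr 0.0.0.0:8935
```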
Software Components
go-livepeer
The core node software. When started with -orchestrator, it runs as the Orchestrator controller.
Handles:
- Session management with Gateways (negotiation, job receipt, result return)
- Worker coordination (in-process transcoder or external transcoder via -orchAddr)
- AI Runner container management (-aiWorker, -aiModels, -aiModelsDir)
- Payment ticket accumulation and on-chain redemption
- Prometheus metrics (port 7935 by default)
- Protocol interactions (reward calls, stake management)
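For the O-T split mentioned above, a standalone transcoder can join a remote Orchestrator over -orchAddr. A sketch, with the host and shared secret as placeholders; confirm flags such as -orchSecret against your release.

```shell
# Standalone transcoder in an O-T split (placeholders, not production values).
livepeer \
  -transcoder \
  -orchAddr <orchestrator-host>:8935 \
  -orchSecret some-shared-secret \
  -nvidia all
```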
AI Runner
A Docker container that handles AI inference workloads. The go-livepeer Orchestrator process spawns and manages AI Runner containers for each configured pipeline and model. Containers start on demand, and operators should confirm the current warm-state behaviour in the release they deploy. The -aiModels flag specifies which pipelines and models to load on startup:
Example -aiModels startup flag
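A hedged sketch of the JSON file the -aiModels flag points to. The field names (pipeline, model_id, warm) and the model IDs shown are assumptions to verify against the go-livepeer release and model catalogue you use.

```shell
# Write an example model config; the schema below is an assumption to verify.
cat > /tmp/aiModels.json <<'EOF'
[
  { "pipeline": "text-to-image", "model_id": "stabilityai/sd-turbo", "warm": true }
]
EOF
# Then reference it at startup:
#   livepeer -orchestrator -aiWorker -aiModels /tmp/aiModels.json
```

Setting warm to true is intended to keep the model loaded between requests; cold models load on demand at the cost of first-request latency.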
livepeer_cli
A command-line tool that connects to a running Orchestrator node. Used for:
- Activating the Orchestrator on-chain
- Configuring reward cut and fee cut
- Setting price per unit and price per Gateway
- Viewing node status, connected Gateways, and current earnings
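A sketch of invoking the tool against a local node. The -host and -http flags and the port are assumptions; check livepeer_cli -h for your release.

```shell
# Connect livepeer_cli to a node whose CLI API listens locally
# (host, flag names, and port 7935 are assumed defaults to verify).
livepeer_cli -host localhost -http 7935
```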
Arbitrum Contracts
See the linked reference for deployed addresses on Arbitrum Mainnet and Arbitrum Sepolia (testnet).
Related Pages
Orchestrator Role
What Orchestrators are and how the role has evolved.
Orchestrator Capabilities
Workload types, pipeline support, and Gateway selection factors.
Incentive Model
Revenue streams, cost structure, and why operating an Orchestrator is profitable.
Payment Receipts
How probabilistic micropayment tickets reach the Orchestrator and are redeemed.