Orchestrators sit between Gateways and the Arbitrum protocol layer. They receive jobs, coordinate GPU workers, return results, and handle the staking, registry, and payment flows that keep the node active.

Orchestrator Layer Context

The Livepeer network has three functional layers: Gateway, compute, and protocol. Orchestrators sit at the compute layer, receiving jobs from Gateways and executing them on GPU hardware. Orchestrators are the only participants that interact with all three layers - accepting jobs from the Gateway layer, executing them at the compute layer, and transacting with the protocol layer for staking, rewards, and payment ticket redemption.

Orchestrator Interactions

An Orchestrator interacts with three categories of actor.

Gateways

Gateways are the Orchestrator’s job sources. Every job an Orchestrator executes arrives from a Gateway.
  • The Gateway establishes a session with the Orchestrator, agreeing on price and verifying capability
  • Jobs arrive as segments (video) or HTTP requests (AI inference), each with a probabilistic micropayment ticket attached
  • The Orchestrator processes the job, returns the result, and accumulates payment tickets
  • Slow or failing responses push the Orchestrator down future Gateway rankings
The Orchestrator does not choose which Gateways to work with - selection runs in the opposite direction. Gateways choose Orchestrators based on capability, price, and performance.
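The probabilistic micropayments attached to each job can be made concrete with a toy calculation: every ticket carries a face value and a win probability, and the expected payment per ticket is their product. This is a minimal sketch with illustrative field names, not the protocol's actual ticket structure.

```go
package main

import "fmt"

// Ticket is a simplified stand-in for a probabilistic micropayment
// ticket. The real protocol ticket carries more fields (sender,
// recipient, recipientRand hash, signature, expiration round, etc.).
type Ticket struct {
	FaceValueWei uint64  // amount paid out if the ticket wins
	WinProb      float64 // probability the ticket is a winner
}

// ExpectedValueWei returns the expected payment a single ticket
// represents: face value weighted by win probability.
func ExpectedValueWei(t Ticket) float64 {
	return float64(t.FaceValueWei) * t.WinProb
}

func main() {
	// e.g. a 0.001 ETH face value with a 1-in-1000 win probability
	// prices each job at an expected 0.000001 ETH.
	t := Ticket{FaceValueWei: 1_000_000_000_000_000, WinProb: 0.001}
	fmt.Printf("expected value: %.0f wei\n", ExpectedValueWei(t))
}
```

Because only winning tickets ever touch the chain, per-job payments stay off-chain while long-run earnings converge on this expected value.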

GPU Workers

The Orchestrator process coordinates two types of GPU worker:
  • Transcoder - handles video transcoding. Receives raw segments, applies output profiles (resolution, bitrate, codec), returns encoded segments. Runs either in-process or as a separate transcoder process in an O-T split configuration.
  • AI Runner - handles AI inference. Receives inference requests, routes them to the appropriate loaded model (or loads the model on demand), and returns results. Runs as a Docker container managed by the Orchestrator process.
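The split between the two worker types amounts to a dispatch on job type. A conceptual sketch, using made-up type names rather than go-livepeer's internal interfaces:

```go
package main

import (
	"errors"
	"fmt"
)

// JobType distinguishes the two workloads an Orchestrator coordinates.
type JobType int

const (
	VideoSegment JobType = iota // routed to a Transcoder
	AIInference                 // routed to an AI Runner container
)

// Worker abstracts either worker type; this interface is illustrative.
type Worker interface {
	Process(payload []byte) ([]byte, error)
}

type Transcoder struct{}

func (Transcoder) Process(p []byte) ([]byte, error) { return p, nil } // placeholder

type AIRunner struct{}

func (AIRunner) Process(p []byte) ([]byte, error) { return p, nil } // placeholder

// Route picks the worker for a job, mirroring the Orchestrator's
// split between the transcoding and inference pipelines.
func Route(t JobType) (Worker, error) {
	switch t {
	case VideoSegment:
		return Transcoder{}, nil
	case AIInference:
		return AIRunner{}, nil
	default:
		return nil, errors.New("unknown job type")
	}
}

func main() {
	w, _ := Route(AIInference)
	fmt.Printf("routed to %T\n", w)
}
```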

Arbitrum Protocol

Orchestrators interact with four Arbitrum smart contracts:
  • BondingManager - staking, delegation, and reward calls
  • RoundsManager - round initialization and the current-round state that gates rewards
  • TicketBroker - redemption of winning payment tickets
  • ServiceRegistry - on-chain registration of the Orchestrator’s service URI
The AIServiceRegistry contract (0x04C0b249740175999E5BF5c9ac1dA92431EF34C5 on Arbitrum Mainnet) is separate from the primary ServiceRegistry and is used specifically for AI subnet registration. Enable it with the -aiServiceRegistry flag. The contract is currently detached from the main protocol controller - confirm the current integration status with your setup guide.

Dual Pipeline Architecture

The Orchestrator node (LivepeerNode in go-livepeer) runs two independent processing pipelines - one for video transcoding and one for AI inference. Both pipelines are active simultaneously in a dual-workload configuration.
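The dual-workload configuration can be sketched as two concurrent consumers inside one process, one per pipeline. A minimal sketch with hypothetical names, assuming each pipeline simply drains its own job queue:

```go
package main

import (
	"fmt"
	"sync"
)

// runPipelines models the dual-workload configuration: the video and
// AI pipelines drain their own job queues concurrently inside one
// process. Names are illustrative, not go-livepeer's internals.
func runPipelines(videoJobs, aiJobs []string) (transcoded, inferred int) {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { // video transcoding pipeline
		defer wg.Done()
		for range videoJobs {
			transcoded++
		}
	}()
	go func() { // AI inference pipeline
		defer wg.Done()
		for range aiJobs {
			inferred++
		}
	}()
	wg.Wait()
	return transcoded, inferred
}

func main() {
	v, a := runPipelines(
		[]string{"segment-1.ts", "segment-2.ts"},
		[]string{"text-to-image request"},
	)
	fmt.Println("transcoded:", v, "inferred:", a)
}
```

The point of the sketch is independence: a stalled queue on one side does not block the other pipeline from draining.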

Video vs AI Pipelines

  • Video pipeline - input: video segments; worker: Transcoder (in-process or O-T split); output: encoded segments for each requested profile
  • AI pipeline - input: HTTP inference requests; worker: AI Runner (Docker container); output: inference results

Request Flow

This is what happens when a Gateway sends a job to an Orchestrator, from receipt through result delivery and payment accumulation.

Lifecycle Steps

  1. Job arrives - The Gateway sends a video segment or AI request with an attached probabilistic micropayment ticket.
  2. Ticket verification - The Orchestrator verifies the ticket is valid (correct signer, sufficient face value, within expected value range).
  3. Pipeline routing - The Orchestrator routes the job to the video pipeline (transcoder) or the AI pipeline (AI runner) based on the job type.
  4. Execution - The worker processes the job. For video: transcodes the segment to all requested output profiles. For AI: runs inference against the loaded model.
  5. Result return - The Orchestrator returns the result to the Gateway over HTTP.
  6. Ticket accumulation - The ticket is stored. At the protocol level, each ticket has a probability of being a “winning” ticket - the Orchestrator redeems only winning tickets on-chain.
  7. On-chain redemption - Winning tickets are submitted to the TicketBroker contract on Arbitrum, releasing ETH to the Orchestrator.

Setup Configurations

The go-livepeer binary supports three physical layouts that map to different hardware scales and operating requirements: a combined node, an O-T split in which standalone transcoders connect to the Orchestrator over the network, and a split AI configuration in which AI Runner workers run on separate machines. The combined layout is described below.
The Orchestrator and Transcoder run as a single process on one machine. This is the default configuration for most solo operators.
  • Simple to operate and monitor
  • Transcoder worker runs in-process (same machine)
  • AI Runner containers managed by the same process
  • Suitable for single-GPU setups
Combined single-node startup
livepeer -orchestrator -transcoder -datadir /path/to/data

Software Components

go-livepeer

The core node software. When started with -orchestrator, it runs as the Orchestrator controller. Handles:
  • Session management with Gateways (negotiation, job receipt, result return)
  • Worker coordination (in-process transcoder or external transcoder via -orchAddr)
  • AI Runner container management (-aiWorker, -aiModels, -aiModelsDir)
  • Payment ticket accumulation and on-chain redemption
  • Prometheus metrics (port 7935 by default)
  • Protocol interactions (reward calls, stake management)
Source: github.com/livepeer/go-livepeer

AI Runner

A Docker container that handles AI inference workloads. The go-livepeer Orchestrator process spawns and manages AI Runner containers for each configured pipeline and model. Containers start on demand, and operators should confirm the current warm-state behaviour in the release they deploy. The -aiModels flag specifies which pipelines and models to load on startup:
Example -aiModels startup flag
-aiModels "text-to-image:stabilityai/stable-diffusion-3-medium-diffusers,audio-to-text:openai/whisper-large-v3"
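The flag value is a comma-separated list of pipeline:model pairs, where the model IDs themselves contain slashes. A minimal parser sketch (a hypothetical helper, not go-livepeer's actual config loader) shows the shape of the value:

```go
package main

import (
	"fmt"
	"strings"
)

// parseAIModels splits an -aiModels style value into pipeline/model
// pairs, e.g. "text-to-image:stabilityai/...". Model IDs may contain
// slashes, so only the first colon separates pipeline from model.
func parseAIModels(flag string) (map[string]string, error) {
	models := make(map[string]string)
	for _, entry := range strings.Split(flag, ",") {
		pipeline, model, ok := strings.Cut(entry, ":")
		if !ok {
			return nil, fmt.Errorf("malformed entry %q", entry)
		}
		models[pipeline] = model
	}
	return models, nil
}

func main() {
	m, _ := parseAIModels("text-to-image:stabilityai/stable-diffusion-3-medium-diffusers,audio-to-text:openai/whisper-large-v3")
	fmt.Println(m["audio-to-text"])
}
```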

livepeer_cli

A command-line tool that connects to a running Orchestrator node. Used for:
  • Activating the Orchestrator on-chain
  • Configuring reward cut and fee cut
  • Setting price per unit and price per Gateway
  • Viewing node status, connected Gateways, and current earnings

Arbitrum Contracts

See the protocol contract addresses reference for deployed addresses on Arbitrum Mainnet and Arbitrum Sepolia (testnet).
Last modified on March 16, 2026