Deployment
A deployment is the complete configuration of a Livepeer orchestrator, defined by three independent choices: node mode, deployment type, and scale. These axes are orthogonal - each can be selected independently of the others.
Node Mode
The node mode determines what workloads the orchestrator accepts and processes. This is a configuration choice, not a protocol distinction - the same go-livepeer binary supports all three modes.
Dual Mode
Dual mode is a dual-workload configuration where a single orchestrator process handles both video transcoding and AI inference. This is the most common production configuration for operators with capable hardware (24 GB+ VRAM GPUs). Dual mode is not a separate protocol mode - it is the same orchestrator process with both video and AI capabilities enabled. The orchestrator advertises both capability types and accepts jobs of either kind.
Previous terminology: Dual mode has been referred to by several names in earlier documentation and community discussion:
- “Combined mode” - used in v1 docs and setup-options.mdx to describe running orchestrator + transcoder in a single process. This term conflates two different concepts: (1) combined O+T process (vs O-T split) and (2) combined video+AI workloads. Dual mode refers specifically to the workload combination, not the process architecture.
- “Hybrid” - used in community discussions and the L0 product exercise. Accurate but informal. Dual mode is the canonical term matching the gateway glossary’s Dual node type.
- “O-T model” - sometimes confused with dual mode, but O-T split refers to the deployment type (separating orchestrator and transcoder processes), not the node mode (which workloads are processed). An O-T split can run in any node mode, including dual.
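The orthogonality of the first two axes can be sketched as a simple cross-product. The mode and type labels below are illustrative shorthand from this glossary, not real CLI values:

```python
from itertools import product

# Axis labels from this glossary; illustrative shorthand, not real CLI values.
NODE_MODES = ["video", "ai", "dual"]
DEPLOYMENT_TYPES = ["solo", "o-t split", "pool"]

# Every pairing is valid because the two axes are orthogonal.
valid_deployments = [
    {"node_mode": mode, "deployment_type": dtype}
    for mode, dtype in product(NODE_MODES, DEPLOYMENT_TYPES)
]

print(len(valid_deployments))  # 3 modes x 3 types = 9 combinations
```

Scale, the third axis, multiplies these combinations further without constraining them.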
Deployment Type
The deployment type determines how the orchestrator infrastructure is organised. Deployment type is independent of node mode. A solo operator can run in Video, AI, or Dual mode. An O-T split can run any node mode. A pool worker typically runs whatever workloads the pool operator accepts.
Protocol Terms
Active set
The group of orchestrators eligible to receive video transcoding work in the current round. Membership is determined by total bonded stake (self-stake + delegated stake): orchestrators are ranked by stake, and the top positions fill the set. The active set size is a protocol parameter. AI inference routing does not require active set membership - it prioritises capability and price over stake position.
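Selection by stake ranking can be sketched as follows. The stake figures and the set size are hypothetical (the real selection happens on-chain, with a much larger set):

```python
# Hypothetical total bonded stake (self-stake + delegated stake), in LPT.
orchestrators = {
    "orch-a": 500_000,
    "orch-b": 120_000,
    "orch-c": 340_000,
    "orch-d": 90_000,
}

ACTIVE_SET_SIZE = 2  # the real protocol parameter is far larger; shrunk so the cut-off is visible

# Rank by total bonded stake, highest first, and keep the top N.
ranked = sorted(orchestrators, key=orchestrators.get, reverse=True)
active_set = ranked[:ACTIVE_SET_SIZE]

print(active_set)  # ['orch-a', 'orch-c']
```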
Round
A protocol time period of approximately 22 hours (5760 Ethereum L1 blocks). Each round, active orchestrators can call Reward() to claim their share of newly minted LPT. Rounds are sequential and continuous.
Reward call
The on-chain transaction (Reward()) that an orchestrator calls once per round to claim minted LPT. Missing a reward call forfeits that round’s rewards permanently. Gas cost on Arbitrum is approximately $0.01-0.12 per call.
Reward cut
The percentage of inflation rewards that the orchestrator keeps before distributing the remainder to delegators. Set by the orchestrator. A lower reward cut means more goes to delegators, which can attract more delegation.
Fee cut
The percentage of ETH service fees that the orchestrator keeps before distributing the remainder to delegators. Separate from reward cut. Both are set independently by the orchestrator.
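As a worked example of how both cuts split earnings between orchestrator and delegators (all figures hypothetical; each percentage is set independently):

```python
# Hypothetical round results for one orchestrator.
minted_lpt = 200.0      # inflation rewards claimed via Reward()
service_fees_eth = 0.5  # ETH earned from redeemed PM tickets

reward_cut = 0.25  # orchestrator keeps 25% of LPT rewards
fee_cut = 0.10     # orchestrator keeps 10% of ETH fees

# Each cut applies to its own revenue stream; the remainder goes to delegators.
orch_lpt = minted_lpt * reward_cut
delegator_lpt = minted_lpt - orch_lpt
orch_eth = service_fees_eth * fee_cut
delegator_eth = service_fees_eth - orch_eth

print(orch_lpt, delegator_lpt)  # 50.0 150.0
print(orch_eth, delegator_eth)  # 0.05 0.45 (up to float rounding)
```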
Stake (self-stake)
LPT bonded directly by the orchestrator to its own address. Self-stake demonstrates commitment and is a prerequisite for activation. Self-stake weight is identical to delegated stake weight for active set ranking.
Delegated stake
LPT bonded to an orchestrator by delegators. Combined with self-stake, delegated stake determines the orchestrator’s total bonded stake, which affects active set position and governance vote weight.
Activation / deactivation
The on-chain transaction that registers (activates) or unregisters (deactivates) an orchestrator on the Livepeer protocol. An active orchestrator appears in the orchestrator pool and is eligible for the active set. Deactivation removes the orchestrator from eligibility.
Service URI
The public URL that an orchestrator advertises on-chain so gateways can connect to it. Must be reachable from the internet. Format: https://your-domain:8935 or similar.
Operational Terms
Orchestrator (process)
The go-livepeer process running with the -orchestrator flag. Handles protocol interaction, job routing, payment negotiation, and capability advertisement. In a combined deployment, also runs the transcoder. In an O-T split, the orchestrator process runs separately from the transcoder.
Transcoder (process)
The go-livepeer process running with the -transcoder flag. Performs the actual GPU compute work (video re-encoding, AI inference). In a combined deployment, runs within the same process as the orchestrator. In an O-T split, runs on a separate machine.
AI worker / AI runner
The container that executes AI inference jobs. go-livepeer communicates with the AI runner via HTTP. The AI runner loads models into GPU memory and processes inference requests. Configured via aiModels.json and the -aiWorker / -aiModels flags.
Session
A logical connection between a gateway and an orchestrator for a specific job. Video sessions are stream-based (one per active stream). AI sessions are job-based (one per inference request or batch). The -maxSessions flag limits concurrent sessions.
Segment
A short chunk of video (typically ~2 seconds) that represents the unit of work for video transcoding. Gateways split incoming streams into segments and distribute them to orchestrators. Orchestrators transcode each segment independently.
Capability
A specific workload type that an orchestrator can process. Video capabilities are implicit (all orchestrators with NVENC support video). AI capabilities are explicit - each pipeline and model is registered individually via aiModels.json and optionally advertised on-chain via -aiServiceRegistry.
Warm model / cold model
A warm model is loaded into GPU memory and ready for immediate inference. A cold model must be loaded from disk before processing, adding seconds to minutes of latency. During the current beta, orchestrators typically support one warm model per GPU.
Pipeline
A specific AI workload type: text-to-image, image-to-image, image-to-video, audio-to-text, LLM, live-video-to-video, etc. Each pipeline has its own API endpoint, model requirements, and pricing.
BYOC (Bring Your Own Container)
Custom AI inference containers that orchestrators can run alongside standard Livepeer pipelines. BYOC containers must conform to the Livepeer AI worker API specification.
Pool
A shared infrastructure arrangement where a pool operator runs the orchestrator node and multiple pool workers connect as remote transcoders. The pool operator handles on-chain operations; workers provide GPU compute. Earnings are distributed by the pool operator.
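Payout schemes vary by pool; a minimal proportional-by-work sketch, with all figures hypothetical:

```python
# One pool round: fees distributed in proportion to work contributed.
pool_fees_eth = 1.0
operator_commission = 0.05  # hypothetical cut kept by the pool operator

work_by_node = {"node-1": 600, "node-2": 300, "node-3": 100}  # e.g. segments processed
total_work = sum(work_by_node.values())

distributable = pool_fees_eth * (1 - operator_commission)
payouts = {node: distributable * work / total_work
           for node, work in work_by_node.items()}

print(payouts)  # node-1 gets 60% of the 0.95 ETH distributable, and so on
```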
Economic Terms
PM ticket (probabilistic micropayment)
The payment unit in the Livepeer protocol. Gateways send lottery tickets to orchestrators for each job. Most tickets are non-winning; winning tickets can be redeemed on-chain for ETH. Over time, the expected value of winning tickets equals the fair payment for work performed.
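The expected-value idea behind PM tickets can be made concrete numerically. The face value and win probability below are illustrative, not protocol constants:

```python
# Hypothetical ticket parameters chosen by the gateway (not protocol constants).
face_value_wei = 10**15    # payout if a ticket wins: 0.001 ETH in wei
win_probability = 1 / 100  # chance that any single ticket is a winner

# Expected value per ticket: the average payment a job is worth.
ev_per_ticket_wei = face_value_wei * win_probability

# Over many tickets, the realised payout converges on this expectation.
tickets_sent = 10_000
expected_total_wei = tickets_sent * ev_per_ticket_wei

print(ev_per_ticket_wei)   # 1e13 wei per ticket
print(expected_total_wei)  # 1e17 wei, i.e. 0.1 ETH
```

Lowering the win probability means fewer, larger on-chain redemptions for the same expected payment.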
Ticket redemption
The on-chain transaction where an orchestrator redeems a winning PM ticket on the Arbitrum TicketBroker contract to receive ETH. Redemption costs gas. Orchestrators typically batch redemptions to optimise gas costs.
pricePerUnit
The flag (-pricePerUnit) that sets the orchestrator’s video transcoding price in wei per pixel. Gateways with -maxPricePerUnit below this value will not route work to the orchestrator.
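To put the wei-per-pixel unit in perspective, here is the arithmetic for a single segment. The resolution, frame rate, and price are illustrative, and the exact pixel accounting (e.g. whether output renditions are counted) is simplified here:

```python
# One 2-second 1080p30 segment priced at a hypothetical 1200 wei per pixel.
width, height = 1920, 1080
fps = 30
segment_seconds = 2
price_per_pixel_wei = 1200

pixels = width * height * fps * segment_seconds
cost_wei = pixels * price_per_pixel_wei

print(pixels)             # 124416000 pixels in the segment
print(cost_wei / 10**18)  # cost in ETH: tiny per segment, but it adds up across streams
```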
pricePerGateway
The flag that allows setting different prices for different gateway addresses. Useful for commercial relationships where specific gateways receive preferential pricing.
autoAdjustPrice
A flag that enables dynamic price adjustment based on current demand. When enabled, the orchestrator automatically adjusts its price based on session utilisation.
Inflation rewards
Newly minted LPT distributed to active orchestrators each round. The inflation rate is a protocol parameter. Rewards are proportional to total bonded stake. Orchestrators must call Reward() each round to claim.
Service fees
ETH earned from processing actual jobs (video transcoding segments, AI inference requests). Paid by gateways via PM tickets. Revenue depends on workload volume, pricing, and gateway selection.
Earnings
The combined total of inflation rewards (LPT) and service fees (ETH) earned by an orchestrator. Operators split earnings with delegators according to their reward cut and fee cut settings.
Deprecated Terms
Broadcaster (deprecated)
The pre-2023 name for a gateway. The -broadcaster flag has been replaced by -gateway. The go-livepeer codebase internally still uses “BroadcasterNode” as the enum name. The CLI tool livepeer_cli displays “BROADCASTER STATS” - this refers to gateway metrics using the legacy term.
Combined mode (ambiguous - avoid)
Combined mode (ambiguous - avoid)
Used in v1 documentation to mean two different things: (1) running orchestrator + transcoder in a single process (vs O-T split), and (2) running both video + AI workloads. To avoid confusion, use “single-process deployment” for the architecture meaning and “dual mode” for the workload meaning.
Hybrid (informal - use Dual Mode)
Community shorthand for running both video and AI workloads. The canonical term is dual mode (matching the gateway glossary’s Dual badge). “Hybrid” may appear in community discussions, Discord, and planning documents.
Pool worker (renamed - use Pool Node)
Earlier v2 documentation used “pool worker” for a node running in transcoder mode within a pool. The canonical term is now pool node. Community terms include “worker”, “miner” (Titan Node), “GPU contributor”, and “transcoder”. The go-livepeer flag is -transcoder in all cases.