Livepeer is a decentralised serverless GPU fabric with a cryptoeconomic control plane, where services are exposed through a set of developer-friendly products and applications, enabling real-time compute infrastructure.
Documentation Index
Fetch the complete documentation index at: https://docs.livepeer.org/llms.txt
Use this file to discover all available pages before exploring further.
Protocol vs Network vs Platform
The protocol provides trust, coordination, and payment mechanisms; the network supplies compute, routing, and verification; and platforms expose the network’s capabilities in a usable way.
Infrastructure Layers
Livepeer is a decentralised serverless GPU fabric with a cryptoeconomic control plane, where services are exposed through a set of developer-friendly products and applications, enabling real-time compute infrastructure. Livepeer’s crypto-economic primitives and decentralised compute mesh provide additional benefits to the system, such as censorship resistance, economic security, and trustless coordination.
Livepeer Protocol and Network Architecture
Protocol contracts
The Livepeer Protocol is a set of Solidity contracts deployed to Arbitrum One. Five contracts carry the load: BondingManager tracks stake and delegation, TicketBroker issues and redeems probabilistic micropayment tickets, RoundsManager advances the protocol clock, Minter issues LPT inflation, and Controller is the upgrade authority that registers all the others.
These contracts run unchanged across every node in the network. An Orchestrator earns by redeeming winning tickets through TicketBroker and by calling reward on BondingManager; a Delegator earns by bonding LPT through BondingManager. See Protocol Architecture for contract addresses and ABIs.
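The reward split between an Orchestrator and its Delegators can be sketched numerically. The reward cut is a real BondingManager parameter, but the arithmetic below is a simplified per-round illustration, not the contract’s fixed-point implementation:

```go
package main

import "fmt"

// splitReward sketches how a reward call's newly minted LPT is divided:
// the orchestrator keeps its reward cut, and the remainder is shared
// among delegators pro rata by bonded stake. Simplified illustration
// only; the contract accounts in fixed-point percentages and per-round
// earnings pools.
func splitReward(reward, rewardCut, delegatorStake, totalStake float64) (orch, delegator float64) {
	orch = reward * rewardCut
	delegator = (reward - orch) * delegatorStake / totalStake
	return
}

func main() {
	// A 100 LPT reward with a 25% reward cut; one delegator holds
	// 1,000 of 10,000 total bonded LPT.
	o, d := splitReward(100, 0.25, 1000, 10000)
	fmt.Printf("orchestrator: %.2f LPT, delegator: %.2f LPT\n", o, d)
	// prints "orchestrator: 25.00 LPT, delegator: 7.50 LPT"
}
```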
Network nodes
The network layer is a single binary, go-livepeer, run in different modes. One mode is the Gateway: it accepts video and AI jobs from clients, selects an Orchestrator, and settles payment in tickets. Another is the Orchestrator: it advertises capabilities, receives jobs, runs them, and redeems winning tickets on-chain. The transcoder mode is a worker that an Orchestrator can split off onto a separate machine to scale horizontally. Newer modes, redeemer and remote signer, separate ticket redemption and key custody from the live job path so that Gateway implementations in other languages can integrate.
A small operator runs a single binary that fills both Gateway and Orchestrator roles. A larger operator splits the modes onto separate machines: Orchestrator on the network edge, transcoders or AI workers behind it on a private subnet. See Network Architecture for the deployment topology.
Off-chain coordination
Most of what happens on the network never touches a contract. Gateways discover Orchestrators through the on-chain subgraph, direct configuration, a webhook, or the Network Capabilities API. Orchestrators advertise capabilities and prices in OrchestratorInfo messages. Payment runs in probabilistic micropayment tickets that batch off-chain until a winning ticket triggers an on-chain redemption. This off-chain layer is what makes per-frame, per-pixel pricing economical.
The off-chain loop is what scales the network: thousands of jobs and tickets per second between Gateway and Orchestrator, with on-chain settlement only when a ticket wins. See Marketplace and Discovery for the full discovery and selection algorithm.
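The economics of a probabilistic ticket reduce to a one-line expectation: face value times win probability equals the fee owed for the work. The struct below is an illustrative sketch with assumed field names, not the actual go-livepeer ticket type:

```go
package main

import "fmt"

// Ticket sketches the two quantities that price a probabilistic
// micropayment. Field names are illustrative, not go-livepeer's schema.
type Ticket struct {
	FaceValue float64 // amount paid out if the ticket wins
	WinProb   float64 // probability the ticket wins on redemption
}

// expectedValue is what the Gateway pays per ticket on average. Tickets
// batch off-chain; only a winning ticket triggers an on-chain
// TicketBroker redemption for the full face value.
func expectedValue(t Ticket) float64 {
	return t.FaceValue * t.WinProb
}

func main() {
	// A 0.1 ETH face value with a 1-in-1000 win probability prices
	// each ticket at 0.0001 ETH in expectation, so tiny per-frame
	// fees never need their own on-chain transaction.
	t := Ticket{FaceValue: 0.1, WinProb: 0.001}
	fmt.Println(expectedValue(t))
}
```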
AI Runtime
The AI runtime sits inside the Orchestrator. ai-worker is a Go subsystem in go-livepeer that owns the Orchestrator-side job lifecycle: it receives an AI job from the Gateway, picks a registered pipeline, starts or wakes the corresponding container, and streams frames in and out. Each pipeline runs as a separate ai-runner Python container, isolated by GPU and model. The transport between ai-worker and ai-runner for real-time work is the trickle protocol; for batch work it is a request/response HTTP API. ComfyStream is one such container, offering a ComfyUI-graph runtime for real-time video-to-video pipelines.
An Orchestrator declares which pipelines it serves through aiModels.json, which sets per-pipeline pricing and warm-model strategy. See AI Capabilities for the pipeline catalogue and pricing units.