In the early days of Livepeer, gateways (then called broadcasters) had a single job: send video streams to orchestrators for transcoding. Today, with the off-chain gateway operational mode shipped in Q4 2025, the role has expanded dramatically. Gateways now route AI inference, live video AI, LLM requests, and custom BYOC workloads - often with zero ETH required. The gateway is where business logic, customer relationships, and service margins live.

Technical Role

A gateway is a demand aggregation and routing layer. It accepts workloads from applications, selects the best orchestrator for each job, handles payment, and returns results; the GPU compute itself runs on orchestrators. Core responsibilities:
  • Job intake - receive video streams (RTMP) or AI inference requests (HTTP API)
  • Orchestrator selection - match jobs to capable orchestrators by capability, price, and latency
  • Payment handling - generate probabilistic micropayment tickets (or delegate to a remote signer)
  • Result delivery - return transcoded video (HLS) or inference results to the application
See for the workload matrix and routing details.

Business Role

Gateways earn at the business layer. The gateway operator sets customer pricing, the protocol constrains orchestrator payments, and the difference becomes the operator’s margin. This makes gateways uniquely positioned as the product layer of the Livepeer network:
  • Pricing control - set your own rates independently of network pricing
  • Customer relationships - API keys, auth, SLAs, support - all at the gateway layer
  • Middleware and product logic - billing, rate limiting, orchestrator tiering, custom routing
  • Platform building - the NaaP (Network as a Platform) model wraps Livepeer as a managed service
See for revenue models, cost structures, and the four operator models.

Network Role

Gateways are the demand side of the Livepeer marketplace. Where orchestrators provide compute supply, gateways aggregate application demand and broker access to that supply.
  • Capability discovery - query the network for orchestrators that support specific pipelines, models, or GPU types
  • Marketplace participation - select orchestrators based on price, performance, and reliability
  • Application bridge - translate application-level requests into protocol-level operations
  • Ecosystem growth - every new gateway adds demand capacity to the network
Gateways participate purely as demand-side actors; staking, protocol rewards, and governance remain the domain of orchestrators.
See for how gateways connect to the protocol and orchestrator network.

Operational Mode

On-chain and off-chain describe your gateway’s operational mode: how it handles payment operations and orchestrator discovery. All workloads run on orchestrator GPU hardware. The distinction is between local ticket signing and delegated remote signing.
In on-chain mode, your gateway holds ETH on Arbitrum and generates probabilistic micropayment tickets directly. This is the original operational mode: required for video transcoding, and it also supports AI inference.
  • Payment: Gateway signs tickets locally using its own ETH deposit + reserve
  • ETH required: Yes - deposit (~0.065 ETH) + reserve (~0.03 ETH) on Arbitrum
  • Crypto knowledge: Wallet, keystore, Arbitrum bridging
  • OS support: Linux, Windows, macOS
  • Setup time: Hours (wallet setup, bridging, funding)
  • Workloads: Video transcoding, AI inference, or both
An on-chain gateway can run video, AI, or both workloads. Dual-workload configuration runs both from a single on-chain gateway node; it is not a third operational mode. See for setup details.
Last modified on March 16, 2026