
Definition

Gateways serve as the primary demand-aggregation layer in the Livepeer network. They accept video transcoding and AI inference requests from end customers, then distribute those jobs across the network of GPU-equipped Orchestrators. In earlier Livepeer documentation, this role was referred to as a broadcaster.

Mental Model

Running a Gateway is similar to operating an API Gateway or Load Balancer in cloud computing — it ingests traffic, routes workloads to backend GPU nodes, and manages session flow without doing the heavy compute itself.
Running a Gateway is not like running a validator on Ethereum. Validators secure consensus whereas Gateways route workloads. It’s more akin to a Sequencer on a Layer 2. Just as a Sequencer ingests user transactions, orders them, and routes them into the rollup execution layer, a Livepeer Gateway performs the same function for the Livepeer compute network.
For the rest of us, running a Gateway is like being a film producer. You take a request, assemble the right specialists, manage constraints, and ensure the final result is delivered reliably, without doing every task yourself.

What is a Gateway?

Gateways are the entry point for applications into the Livepeer compute network. They are the coordination layer that connects real-time AI and video workloads to the Orchestrators that perform the GPU compute, sitting between the protocol and the distributed compute network. A Gateway is a self-operated Livepeer node that interacts directly with Orchestrators, submits jobs, handles payment, and exposes direct protocol interfaces. Hosted services like Daydream operate as both application layers and Gateways. A Gateway is responsible for:
  • validating requests
  • selecting Workers
  • translating requests into Worker OpenAPI calls
  • aggregating results
If you are coming from an Ethereum background, Gateways can loosely be thought of as sequencers in L2 rollups. If you are coming from a traditional cloud background, Gateways are akin to API gateways or load balancers. Anyone who wants to build applications and services (like Daydream and Stream.place) on top of the Livepeer protocol will build their own Gateway to connect with Orchestrators and route jobs. This lets them offer their services to Livepeer developers, builders, and end users, and connects their application to the Livepeer GPU network (DePIN / Orchestrators).
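The four responsibilities above (validating requests, selecting Workers, translating requests into Worker calls, aggregating results) can be sketched as a single request lifecycle. This is a minimal illustration only, not go-livepeer's actual implementation; the Job and Worker types and the HandleJob function are hypothetical.

```go
package main

import (
	"errors"
	"fmt"
)

// Job is a hypothetical inference or transcode request from an application.
type Job struct {
	Pipeline string // e.g. "transcode" or an AI pipeline name
	Model    string // required model, if any
	Payload  []byte
}

// Worker is a hypothetical view of an Orchestrator's advertised capabilities.
type Worker struct {
	Addr   string
	Models map[string]bool
}

// HandleJob walks the four gateway responsibilities: validate the request,
// select a capable worker, dispatch (stubbed here in place of a real
// worker API call), and return the aggregated result to the caller.
func HandleJob(job Job, workers []Worker) (string, error) {
	// 1. Validate the request.
	if job.Pipeline == "" || len(job.Payload) == 0 {
		return "", errors.New("invalid job: missing pipeline or payload")
	}
	// 2. Select a worker that supports the requested model.
	var selected *Worker
	for i := range workers {
		if job.Model == "" || workers[i].Models[job.Model] {
			selected = &workers[i]
			break
		}
	}
	if selected == nil {
		return "", errors.New("no worker supports model " + job.Model)
	}
	// 3. Translate into a worker call (stubbed) and 4. aggregate the result.
	return fmt.Sprintf("dispatched %s to %s", job.Pipeline, selected.Addr), nil
}

func main() {
	workers := []Worker{
		{Addr: "orch-a:8935", Models: map[string]bool{"sdxl": true}},
		{Addr: "orch-b:8935", Models: map[string]bool{"llava": true}},
	}
	out, err := HandleJob(Job{Pipeline: "ai-inference", Model: "llava", Payload: []byte("frame")}, workers)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(out)
}
```

In a real Gateway, step 3 is an HTTP call against the Worker's OpenAPI interface and step 4 may merge partial results from several Workers; the control flow, however, follows this shape.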

What Gateways Do

Gateways handle all service-level logic required to operate a scalable, low-latency AI video network:
  • Job Intake
    They receive workloads from applications using Livepeer APIs, PyTrickle, or BYOC integrations.
  • Capability & Model Matching
    Gateways determine which orchestrators support the required GPU, model, or pipeline.
  • Routing & Scheduling
    They dispatch jobs to the optimal orchestrator based on performance, availability, and pricing.
  • Marketplace Exposure
    Gateway operators can publish the services they offer, including supported models, pipelines, and pricing structures.
Gateways do not perform GPU compute. Instead, they focus on coordination and service routing.
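The routing and scheduling step above can be pictured as a filter-and-rank loop over known Orchestrators. The sketch below is illustrative only; the Orch type, the score weights, and the Rank function are assumptions, not Livepeer's actual selection algorithm.

```go
package main

import (
	"fmt"
	"sort"
)

// Orch is a hypothetical snapshot of an Orchestrator's advertised state.
type Orch struct {
	Addr         string
	LatencyMs    float64 // recently observed round-trip latency
	PricePerUnit float64 // advertised price for the requested capability
	Available    bool    // currently accepting jobs
}

// score folds latency and price into one number; lower is better.
// The weight on price is purely illustrative.
func score(o Orch) float64 {
	return o.LatencyMs + 10*o.PricePerUnit
}

// Rank drops unavailable Orchestrators and orders the rest from best
// to worst score, the way a gateway's selection loop might.
func Rank(orchs []Orch) []Orch {
	var avail []Orch
	for _, o := range orchs {
		if o.Available {
			avail = append(avail, o)
		}
	}
	sort.Slice(avail, func(i, j int) bool { return score(avail[i]) < score(avail[j]) })
	return avail
}

func main() {
	orchs := []Orch{
		{Addr: "orch-a", LatencyMs: 40, PricePerUnit: 1.2, Available: true},
		{Addr: "orch-b", LatencyMs: 25, PricePerUnit: 0.9, Available: true},
		{Addr: "orch-c", LatencyMs: 10, PricePerUnit: 0.5, Available: false},
	}
	for _, o := range Rank(orchs) {
		fmt.Println(o.Addr) // best-scoring available orchestrator first
	}
}
```

A production Gateway would also fold in capability and model matching from the previous step, refresh latency measurements continuously, and fall back to the next-ranked Orchestrator on failure.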


Why Gateways Matter

As Livepeer transitions into a high-demand, real-time AI network, Gateways become essential infrastructure. They enable:
  • Low-latency workflows for Daydream, ComfyStream, and other real-time AI video tools
  • Dynamic GPU routing for inference-heavy workloads
  • A decentralized marketplace of compute capabilities
  • Flexible integration via the BYOC pipeline model
Gateways simplify the developer experience while preserving the decentralization, performance, and competitiveness of the Livepeer network.

Summary

Gateways are the coordination and routing layer of the Livepeer ecosystem. They expose capabilities, price services, accept workloads, and dispatch them to Orchestrators for GPU execution. This design enables a scalable, low-latency, AI-ready decentralized compute marketplace and lets Livepeer scale into a global provider of real-time AI video infrastructure.


Last modified on March 1, 2026