A gateway is the node that routes your AI inference requests to orchestrators on the Livepeer network. By default, you access a gateway hosted by Livepeer Studio or a community provider. When you run your own, you control the orchestrator selection, the auth model, and the cost structure. This page helps you decide whether self-hosting is the right move for where you are now.

## When to run your own gateway

| If you need… | Self-hosted gateway | Hosted API (Studio / community) |
| --- | --- | --- |
| Get started fast | No — setup adds overhead | Yes — API key and go |
| Cost savings at scale | Yes — direct orchestrator settlement, no hosted-API markup | No — hosted provider margin on top of network price |
| Custom orchestrator selection | Yes — pass any `-orchAddr` list or custom discovery endpoint | No — provider controls routing |
| Data stays within your infrastructure | Yes — gateway runs on your servers; requests never leave your stack | No — requests route through provider |
| Production resilience / redundancy | Yes — run multiple gateways, control failover logic | Partial — depends on provider SLAs |
| Custom auth or billing model | Yes — integrate your own user management, remote signer, JWT layer | No — provider auth model only |
| Zero infrastructure overhead | No — you own the binary and the machine | Yes — nothing to run |
The natural path for most developers is: start with the hosted API, build and validate your application, then self-host once usage grows and the cost or control trade-offs become worth the overhead. The November 2025 Network Vision blog post described this arc directly:
“Similarly to how startups start to build on platforms like Heroku, Netlify, or Vercel, and then as they scale and need more control and cost savings they build direct on AWS, and then ultimately move to their own datacenters after reaching even more scale — users of Daydream or a real-time Agent platform built on Livepeer, may ultimately choose to run their own gateway.”
There is no hard threshold at which you must switch. The signals are: your monthly API spend is material, you want to specify which orchestrators handle your jobs, or you need the inference path to stay within your own infrastructure.

## What self-hosting requires

This is not a setup guide — for that, go to the Gateways tab. This is a realistic checklist of what you are signing up for before you commit.
> **Note:** Windows and macOS binaries for the AI gateway are not currently available. Running a self-hosted AI gateway requires Linux or Docker.
| Requirement | AI gateway (off-chain) | Video gateway (on-chain) |
| --- | --- | --- |
| Operating system | Linux (or Docker on any host) | Linux |
| ETH / on-chain account | Not required | Required — ETH account + Arbitrum RPC URL |
| Staking / LPT | Not required | Not required (for gateway role) |
| go-livepeer binary | Required — Linux binary or `livepeer/go-livepeer:master` Docker image | Required — same binary |
| Orchestrator list | Required — at least one `-orchAddr` endpoint to route to | Required — network discovery via on-chain signalling |
| Open port | Port 8937 (default) accessible from your app | Port 8937 (default) |
| Time to first request | ~15 minutes with Docker; longer for binary + config | Longer — requires ETH account setup and on-chain registration |
The AI gateway path is designed for developers, not infrastructure operators. A single Docker command launches a functional gateway. The on-chain video gateway path is more involved and is primarily relevant to operators running the full Livepeer transcoding node.
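As a rough sketch of that single Docker command, the launch looks like the following. Flag names (`-gateway`, `-network offchain`, `-orchAddr`, `-httpAddr`) and the orchestrator URL are assumptions here; verify them against the go-livepeer CLI help and the setup guide on the Gateways tab before relying on them.

```shell
# Minimal off-chain AI gateway sketch. The orchestrator endpoint is a
# placeholder; substitute one you actually want to route to. Check
# `livepeer -help` in your build to confirm flag names.
docker run -d --name livepeer-ai-gateway \
  -p 8937:8937 \
  livepeer/go-livepeer:master \
  -gateway \
  -network offchain \
  -orchAddr https://your-orchestrator.example.com:8935 \
  -httpAddr 0.0.0.0:8937
```

Once the container is up, your application talks to port 8937 exactly as it would to a hosted endpoint.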

## The two gateway types

| Type | Use for | On-chain? | ETH required? | Where to start |
| --- | --- | --- | --- | --- |
| AI gateway (off-chain) | AI inference — text-to-image, LLM, ComfyStream, BYOC | No | No | Set up an AI Gateway |
| Video gateway (on-chain broadcaster) | Video transcoding, HLS delivery | Yes | Yes | Set up a Video Gateway |
The public gateway at dream-gateway.livepeer.cloud and the Livepeer Studio AI API are both off-chain AI gateway implementations of the same go-livepeer binary. When you self-host, you run that same binary yourself.
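Because a self-hosted gateway exposes the same inference API as the hosted endpoints, sending a job to your own instance is a plain HTTP call. A minimal sketch in Python — the `/text-to-image` path, the payload field names, and the model ID are illustrative assumptions; check the AI API reference for the exact schema:

```python
import json
import urllib.request

# Hypothetical local gateway endpoint; 8937 is the default gateway port
# mentioned on this page. The path and schema below are assumptions —
# confirm them against the AI API reference.
GATEWAY_URL = "http://localhost:8937/text-to-image"


def build_payload(prompt: str, model_id: str) -> dict:
    """Build a minimal text-to-image request body (illustrative fields)."""
    return {"model_id": model_id, "prompt": prompt}


def send_request(prompt: str, model_id: str) -> bytes:
    """POST the payload to the self-hosted gateway and return raw bytes."""
    body = json.dumps(build_payload(prompt, model_id)).encode()
    req = urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


if __name__ == "__main__":
    # Model ID is a placeholder; use one your orchestrators actually serve.
    print(build_payload("a lighthouse at dusk", "your-model-id"))
```

The point is less the specific endpoint than the shape of the switch: moving from hosted to self-hosted is a URL change, not an application rewrite.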

## Next steps

Last modified on March 16, 2026