Gateway operators make three independent choices when deploying a Gateway: setup type (what software to run), operational mode (on-chain or off-chain), and node type (what workloads to route).

Setup type: The software you run.
  • Livepeer Protocol: go-livepeer (Docker or source deployment), the core implementation for Gateway nodes
  • SDK/Custom (Python, browser, mobile): language-specific abstractions over the Livepeer Protocol
  • DevOps Managed Platforms (e.g. GWID): abstract deployment operations and provide a 'one-click deploy' experience
  • Hosted (Livepeer Studio, Cloud SPE): Gateway application-layer services (not a gateway deployment)
Operational mode: How your gateway integrates with the Livepeer protocol.
  • Off-chain uses a remote signer for payments and manual orchestrator discovery.
  • On-chain connects directly to the Livepeer Network on Arbitrum for payments and automatic pooled orchestrator discovery.
Node type: What workloads your gateway routes.
  • Video (RTMP transcoding)
  • AI (Inference & real-time AI routing)
  • Dual (both pipelines on a single node)
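These three choices surface as flags on a single go-livepeer invocation. The sketch below is illustrative, not exhaustive; the orchestrator and RPC endpoints are placeholder examples, and you should confirm flag names against `livepeer -help` for your release:

```shell
# Off-chain mode: manual orchestrator discovery; no on-chain wallet needed.
# The orchestrator address is a hypothetical example.
livepeer -gateway \
  -network offchain \
  -orchAddr https://orchestrator.example.com:8935

# On-chain mode: connects to the Livepeer Network on Arbitrum and
# discovers orchestrators from the on-chain pool.
livepeer -gateway \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb1.arbitrum.io/rpc
```

Node type is selected independently of the mode shown here; the same binary routes video, AI, or both.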

Setup Types

go-livepeer

The standard gateway binary. It covers all node types - Video transcoding, AI inference, and Dual (both) - and supports both operational modes. Install options: Docker (recommended) or build from source. Operational mode is an independent choice, and so is node type:
  • Video - RTMP transcoding to HLS. Replaces cloud services (Mux, AWS MediaLive, Wowza).
  • AI - inference routing (text-to-image, image-to-video, LLM, audio-to-text, and more).
  • Dual - both video and AI on a single node. Linux only for the AI component.
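A minimal Docker deployment of the standard binary might look like the following sketch. The image tag, published ports, and orchestrator address are assumptions; check the go-livepeer releases and `livepeer -help` for the values current to your version:

```shell
# Pull the official go-livepeer image (tag is an assumption; check releases).
docker pull livepeer/go-livepeer:latest

# Run as an off-chain video gateway.
# 1935 = RTMP ingest, 8935 = HTTP/media (assumed defaults; confirm with -help).
# The orchestrator address is a hypothetical example.
docker run -d --name livepeer-gateway \
  -p 1935:1935 -p 8935:8935 \
  livepeer/go-livepeer:latest \
  -gateway \
  -network offchain \
  -orchAddr https://orchestrator.example.com:8935
```

Building from source instead of using Docker yields the same `livepeer` binary and accepts the same flags.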
Correcting a common misconception: Older docs and community guides state that running an off-chain gateway requires your own orchestrator node (and therefore a GPU). This is incorrect. Gateways route to orchestrators on the network. You do not need a GPU.

Last modified on March 16, 2026