This page is a work in progress.
TODO: Edit, Streamline, Format & Style

Definition

Gateways are the primary demand-aggregation layer of the Livepeer network. They accept video transcoding and AI inference requests from end clients, then distribute those jobs across the network of GPU-equipped Orchestrators. In earlier Livepeer documentation, this role was called the Broadcaster.

Mental Model

Running a Gateway is similar to operating an API Gateway or Load Balancer in cloud computing — it ingests traffic, routes workloads to backend GPU nodes, and manages session flow without doing the heavy compute itself.
Running a Gateway is not like running a validator on Ethereum. Validators secure consensus whereas Gateways route workloads. It’s more akin to a Sequencer on a Layer 2. Just as a Sequencer ingests user transactions, orders them, and routes them into the rollup execution layer, a Livepeer Gateway performs the same function for the Livepeer compute network.
For the rest of us, running a Gateway is like being a film producer. You take a request, assemble the right specialists, manage constraints, and ensure the final result is delivered reliably—without doing every task yourself.

What Is a Gateway?

Gateways are the entry point for applications into the Livepeer compute network: the coordination layer that connects real-time AI and video workloads to the Orchestrators performing GPU compute, and the key technical layer between the protocol and the distributed compute network. A Gateway is a self-hosted Livepeer node that interacts with Orchestrators directly, submitting jobs, handling payments, and exposing the raw protocol interface. Hosted services such as Daydream are not Gateways.

Gateways are responsible for:
  • Validating requests
  • Selecting workers
  • Translating requests into worker OpenAPI calls
  • Aggregating results
Gateways earn revenue from the transaction fees on every job they route. If you come from an Ethereum background, a Gateway can loosely be viewed as the sequencer of an L2 rollup; if you come from a traditional cloud background, it resembles an API gateway or load balancer. Anyone who wants to build applications and services on top of the Livepeer protocol (such as [Daydream] and [Stream.place]) runs their own Gateway in order to deliver their service to Livepeer developers, builders, and end users, and to let their application communicate with the Livepeer GPU network (DePIN / Orchestrators).
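The responsibilities listed above (validate, select a worker, call the worker, return the result) can be sketched as a minimal request lifecycle. Everything in this sketch — the `Orchestrator` type, the field names, the cheapest-first selection rule — is invented for illustration and is not the go-livepeer implementation:

```python
from dataclasses import dataclass

@dataclass
class Orchestrator:
    url: str
    models: set            # model IDs this node advertises (illustrative)
    price_per_unit: int    # illustrative price per compute unit

def validate(request: dict) -> None:
    """Reject malformed jobs before anything is routed."""
    if "model" not in request or "payload" not in request:
        raise ValueError("request must name a model and carry a payload")

def select_worker(request: dict, pool: list) -> Orchestrator:
    """Pick the cheapest Orchestrator advertising the requested model."""
    candidates = [o for o in pool if request["model"] in o.models]
    if not candidates:
        raise LookupError("no orchestrator serves " + request["model"])
    return min(candidates, key=lambda o: o.price_per_unit)

def route(request: dict, pool: list) -> dict:
    """Validate, select a worker, and report the routing decision.
    A real Gateway would now issue the worker's OpenAPI call and
    aggregate the results; here we only echo where the job went."""
    validate(request)
    worker = select_worker(request, pool)
    return {"routed_to": worker.url, "model": request["model"]}

pool = [
    Orchestrator("https://orch-a.example", {"sdxl"}, 120),
    Orchestrator("https://orch-b.example", {"sdxl", "llava"}, 90),
]
print(route({"model": "sdxl", "payload": b"frame-data"}, pool))
```

With the sample pool, the job lands on the cheaper of the two nodes that support the model.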

What Gateways Do

Gateways handle all of the service-level logic required to run a scalable, low-latency AI video network:
  • Job intake
    They receive workloads from applications that integrate via the Livepeer API, PyTrickle, or BYOC.
  • Capability and model matching
    Gateways determine which Orchestrators support the required GPUs, models, or pipelines.
  • Routing and scheduling
    They dispatch jobs to the best Orchestrator based on performance, availability, and pricing.
  • Marketplace exposure
    Gateway operators can publish the services they offer, including supported models, pipelines, and pricing structures.
Gateways do _not_ perform GPU compute themselves. Instead, they focus on coordination and service routing.
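The routing-and-scheduling step can be illustrated with a toy scoring rule that blends the three factors named above. The weights, normalization constants, and availability floor below are invented for the example; actual Gateway selection logic differs:

```python
def score(orch: dict,
          w_latency: float = 0.5, w_price: float = 0.3, w_uptime: float = 0.2) -> float:
    """Lower is better: blend normalized latency, price, and downtime.
    Weights and normalization constants are illustrative; a production
    Gateway would tune them against real measurements."""
    return (w_latency * orch["latency_ms"] / 1000.0
            + w_price * orch["price"] / 100.0
            + w_uptime * (1.0 - orch["uptime"]))

def dispatch(candidates: list) -> dict:
    """Drop nodes below an availability floor, then take the best score."""
    available = [o for o in candidates if o["uptime"] >= 0.95]
    if not available:
        raise LookupError("no sufficiently available orchestrator")
    return min(available, key=score)

nodes = [
    {"url": "https://fast.example",  "latency_ms": 80,  "price": 70, "uptime": 0.99},
    {"url": "https://cheap.example", "latency_ms": 400, "price": 20, "uptime": 0.97},
    {"url": "https://flaky.example", "latency_ms": 50,  "price": 10, "uptime": 0.80},
]
print(dispatch(nodes)["url"])
```

Note how the cheapest node loses on latency and the fastest raw node is excluded by the availability floor, which is exactly the kind of trade-off a Gateway arbitrates.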

Gateway Functions & Services

Learn More About Gateway Functions & Services

Why Gateways Matter

As Livepeer transitions into a high-demand, real-time AI network, Gateways become critical infrastructure. They enable:
  • Low-latency workflows for Daydream, ComfyStream, and other real-time AI video tools
  • Dynamic GPU routing for compute-intensive workloads
  • A decentralized marketplace for compute capacity
  • Flexible integration through the BYOC pipeline model
Gateways simplify the developer experience while preserving the decentralization, performance, and competitiveness of the Livepeer network.

Summary

Gateways are the coordination and routing layer of the Livepeer ecosystem. They expose capabilities, price services, accept workloads, and distribute them to Orchestrators for GPU execution. This design enables a scalable, low-latency, AI-oriented decentralized compute marketplace, and it allows Livepeer to scale into a global provider of real-time AI video infrastructure.



WIP: Unsure where below section belongs currently

Key Marketplace Features

1. Capability Discovery

Gateways and orchestrators list:
  • AI model support
  • Versioning and model weights
  • Pipeline compatibility
  • GPU type and compute class
Applications can programmatically choose the best provider.
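Programmatic selection over those listed attributes can be sketched as a simple filter. The catalog schema here (field names, GPU VRAM as the compute-class proxy) is hypothetical, not a Livepeer API:

```python
def discover(providers: list, model: str, pipeline: str, min_vram_gb: int) -> list:
    """Return providers advertising the required model, pipeline,
    and a GPU class with at least the requested VRAM."""
    return [p for p in providers
            if model in p["models"]
            and pipeline in p["pipelines"]
            and p["gpu_vram_gb"] >= min_vram_gb]

catalog = [
    {"url": "https://a.example", "models": {"sdxl"},
     "pipelines": {"text-to-image"}, "gpu_vram_gb": 24},
    {"url": "https://b.example", "models": {"sdxl", "llava"},
     "pipelines": {"text-to-image", "byoc"}, "gpu_vram_gb": 80},
]
# Only the second provider supports the "byoc" pipeline on a >=40 GB GPU.
print(discover(catalog, "sdxl", "byoc", 40))
```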

2. Dynamic Pricing

Pricing can vary by:
  • GPU class
  • Model complexity
  • Latency SLA
  • Throughput requirements
  • Region
Gateways expose pricing APIs for transparent selection.
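A quote derived from several of those dimensions might look like the following. The base prices, region multipliers, and SLA premium are made-up numbers for illustration only:

```python
BASE_PRICE = {"RTX4090": 25, "A100": 40, "H100": 70}   # illustrative units per compute second
REGION_MULTIPLIER = {"us": 1.0, "eu": 1.1, "ap": 1.2}  # illustrative regional uplift

def quote(gpu: str, region: str, latency_sla_ms: int) -> float:
    """Derive a per-unit quote from GPU class, region, and latency SLA.
    Tight SLAs carry a premium; all constants here are invented."""
    price = BASE_PRICE[gpu] * REGION_MULTIPLIER[region]
    if latency_sla_ms < 100:
        price *= 1.5
    return round(price, 2)

print(quote("A100", "eu", 200))   # standard SLA
print(quote("H100", "us", 50))    # tight SLA pays the premium
```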

3. Performance Competition

Orchestrators compete on:
  • Speed
  • Reliability
  • GPU quality
  • Cost efficiency
Gateways compete on:
  • Routing quality
  • Supported features
  • Latency
  • Developer ecosystem fit
This creates a healthy decentralized market.

4. BYOC Integration

Any container-based pipeline can be brought into the marketplace:
  • Run custom AI models
  • Run ML workflows
  • Execute arbitrary compute
  • Support enterprise workloads
Gateways advertise BYOC offerings; orchestrators execute containers.
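The advertise/match split can be sketched with a hypothetical offering descriptor. The field names and values below are invented for the example and are not a Livepeer BYOC schema:

```python
# A hypothetical descriptor for a BYOC offering a Gateway might advertise.
byoc_offering = {
    "name": "custom-upscaler",
    "image": "registry.example.com/upscaler:1.2",  # container the orchestrator runs
    "inputs": {"video": "h264"},
    "outputs": {"video": "h264"},
    "price_per_second": 0.002,                     # illustrative pricing
}

def matches(offering: dict, request: dict) -> bool:
    """Check that an incoming BYOC request fits what the offering advertises."""
    return (request["pipeline"] == offering["name"]
            and request["input_codec"] in offering["inputs"].values())

print(matches(byoc_offering, {"pipeline": "custom-upscaler", "input_codec": "h264"}))
```

The Gateway only needs to match requests against what it advertises; the container image itself is pulled and executed by the orchestrator.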

Protocol Overview

Understand the Full Livepeer Network Design

Marketplace Benefits

  • Developer choice — choose the best model, price, and performance
  • Economic incentives — better nodes earn more work
  • Scalability — network supply grows independently of demand
  • Innovation unlock — new models and pipelines can be added instantly
  • Decentralization — no single operator controls the workload flow

Summary

The Marketplace turns Livepeer into a competitive, discoverable, real-time AI compute layer.
  • Gateways expose services
  • Orchestrators execute them
  • Applications choose the best fit
  • Developers build on top of it
  • Users benefit from low-latency, high-performance AI

Last modified on March 1, 2026