This page is a work in progress.
TODO: Edit, Streamline, Format & Style
Definition
Gateways are the primary demand-aggregation layer in the Livepeer network. They accept video transcoding and AI inference requests from end clients, then distribute those tasks across a network of GPU-equipped orchestrators. In earlier Livepeer documentation this role was called the Broadcaster.
Mental Models
From a Cloud Background?
Running a Gateway is similar to operating an API Gateway or Load Balancer in cloud computing —
it ingests traffic, routes workloads to backend GPU nodes, and manages session flow
without doing the heavy compute itself.
From an Ethereum Background?
Running a Gateway is not like running a validator on Ethereum.
Validators secure consensus, whereas Gateways route workloads. A Gateway is more akin to a Sequencer on a Layer 2.
Just as a Sequencer ingests user transactions, orders them, and routes them into the rollup execution layer,
a Livepeer Gateway performs the same function for the Livepeer compute network.
Neither? You can still run a gateway!
For the rest of us, running a Gateway is like being a film producer.
You take a request, assemble the right specialists, manage constraints,
and ensure the final result is delivered reliably—without doing every task yourself.
What Is a Gateway?
Gateways are the entry point for applications into the Livepeer compute network. They form the coordination layer that connects real-time AI and video workloads with the orchestrators that perform the GPU compute, acting as the key technical layer between the protocol and the distributed compute network. A Gateway is a self-hosted Livepeer node that interacts directly with orchestrators: it submits jobs, handles payments, and exposes the protocol interface directly. Hosted services such as Daydream are not Gateways.

A Gateway is responsible for:
- Validating requests
- Selecting workers
- Translating requests into worker OpenAPI calls
- Aggregating results
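The four responsibilities above can be sketched as a single request pipeline. This is a minimal illustration only; every name here (`validate_request`, `select_worker`, and so on) is invented and does not correspond to the go-livepeer codebase:

```python
# Illustrative sketch of a Gateway's request pipeline: validate, route,
# execute, aggregate. All names and schemas are invented for this example.

def validate_request(req: dict) -> dict:
    """Reject malformed jobs before any network work is done."""
    if "pipeline" not in req or "payload" not in req:
        raise ValueError("request must name a pipeline and carry a payload")
    return req

def select_worker(req: dict, workers: list) -> dict:
    """Pick the first orchestrator advertising the requested pipeline."""
    for w in workers:
        if req["pipeline"] in w["pipelines"]:
            return w
    raise LookupError(f"no worker supports {req['pipeline']!r}")

def call_worker(worker: dict, req: dict) -> dict:
    """Stand-in for the worker's OpenAPI call; returns a canned result."""
    return {"worker": worker["id"], "output": f"processed:{req['payload']}"}

def handle(req: dict, workers: list) -> dict:
    """Run one job end to end and aggregate the result."""
    req = validate_request(req)
    worker = select_worker(req, workers)
    result = call_worker(worker, req)
    return {"pipeline": req["pipeline"], **result}

workers = [{"id": "orch-a", "pipelines": ["text-to-image"]},
           {"id": "orch-b", "pipelines": ["upscale"]}]
print(handle({"pipeline": "upscale", "payload": "frame-1"}, workers))
```

The point of the sketch is the division of labor: the Gateway does no GPU work itself, it only validates, routes, and aggregates.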
The Role of a Gateway
Gateways handle all of the service-level logic needed to run a scalable, low-latency AI video network:

Job Ingestion
They receive workloads from applications that use the Livepeer API, PyTrickle, or BYOC integrations.

Capability & Model Matching
Gateways determine which orchestrators support the required GPUs, models, or pipelines.

Routing & Scheduling
They dispatch jobs to the best orchestrator based on performance, availability, and pricing.

Marketplace Exposure
Gateway operators can publish the services they offer, including supported models, pipelines, and pricing structures.
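Capability and model matching reduces to a filter over orchestrator advertisements. A hedged sketch, with invented field names (`pipelines`, `models`, `gpu_vram_gb`) rather than the real advertisement schema:

```python
# Sketch of capability & model matching: which orchestrators can serve a job?
# Field names are hypothetical and chosen only to mirror the text above.

def matches(job: dict, orch: dict) -> bool:
    return (job["pipeline"] in orch["pipelines"]
            and job["model"] in orch["models"]
            and orch["gpu_vram_gb"] >= job["min_vram_gb"])

orchestrators = [
    {"id": "o1", "pipelines": ["live-video-to-video"], "models": ["sdxl"], "gpu_vram_gb": 24},
    {"id": "o2", "pipelines": ["live-video-to-video"], "models": ["sdxl"], "gpu_vram_gb": 12},
]
job = {"pipeline": "live-video-to-video", "model": "sdxl", "min_vram_gb": 16}

# Only o1 has enough VRAM for this job.
eligible = [o["id"] for o in orchestrators if matches(job, o)]
print(eligible)
```

Routing and scheduling would then rank the `eligible` set by performance, availability, and price.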
Gateway Functions & Services
Learn More About Gateway Functions & Services
Why Gateways Matter
As Livepeer transitions into a high-demand, real-time AI network, Gateways become critical infrastructure. They enable:
- Low-latency workflows for Daydream, ComfyStream, and other real-time AI video tools
- Dynamic GPU routing for compute-intensive workloads
- A decentralized marketplace for compute capacity
- Flexible integration through the BYOC pipeline model
Summary
Gateways are the coordination and routing layer of the Livepeer ecosystem. They expose capabilities, price services, accept workloads, and distribute them to orchestrators for GPU execution. This design enables a scalable, low-latency, AI-oriented decentralized compute marketplace, and it allows Livepeer to scale into a global provider of real-time AI video infrastructure.
Marketplace Content
Key Marketplace Features
1. Capability Discovery
Gateways and orchestrators list:
- AI model support
- Versioning and model weights
- Pipeline compatibility
- GPU type and compute class
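To make the list concrete, a capability advertisement might look like the structure below. The schema is entirely hypothetical; the actual format is defined by the node software, not here:

```python
# Hypothetical capability listing an orchestrator might advertise.
# Field names are illustrative only.
listing = {
    "models": [{"name": "sdxl", "version": "1.0", "weights": "fp16"}],
    "pipelines": ["text-to-image", "live-video-to-video"],
    "gpu": {"type": "RTX 4090", "compute_class": "consumer"},
}
print(sorted(listing))
```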
2. Dynamic Pricing
Pricing can vary by:
- GPU class
- Model complexity
- Latency SLA
- Throughput requirements
- Region
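One simple way to model pricing along these axes is a base rate scaled by per-factor multipliers. The function and every multiplier below are invented for illustration; they are not protocol values:

```python
# Sketch of dynamic pricing: a base rate adjusted by the factors above.
# All multipliers and units are illustrative, not real network prices.

def price_per_unit(base: float, gpu_mult: float, model_mult: float,
                   latency_sla_mult: float, region_mult: float) -> float:
    return base * gpu_mult * model_mult * latency_sla_mult * region_mult

# e.g. a premium GPU (1.5x) running a heavy model (2x) under a strict
# latency SLA (1.25x) in a high-cost region (1.1x):
quote = price_per_unit(100.0, 1.5, 2.0, 1.25, 1.1)
print(round(quote, 2))
```

A multiplicative model like this is just one option; real operators may price by throughput tiers or negotiated SLAs instead.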
3. Performance Competition
Orchestrators compete on:
- Speed
- Reliability
- GPU quality
- Cost efficiency
- Routing quality
- Supported features
- Latency
- Developer ecosystem fit
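A Gateway could fold these competitive dimensions into a single ranking, for example with a weighted score. The weights and metrics below are purely illustrative, not anything the protocol prescribes:

```python
# One possible way to rank competing orchestrators: a weighted score over
# a few of the dimensions above. Weights and metric values are invented.

WEIGHTS = {"speed": 0.4, "reliability": 0.3, "cost_efficiency": 0.3}

def score(metrics: dict) -> float:
    # Each metric is assumed normalized to [0, 1]; higher is better.
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

candidates = {
    "orch-a": {"speed": 0.9, "reliability": 0.95, "cost_efficiency": 0.6},
    "orch-b": {"speed": 0.7, "reliability": 0.99, "cost_efficiency": 0.9},
}
# orch-b's reliability and cost edge outweighs orch-a's speed here.
best = max(candidates, key=lambda o: score(candidates[o]))
print(best)
```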
4. BYOC Integration
Any container-based pipeline can be brought into the marketplace:
- Run custom AI models
- Run ML workflows
- Execute arbitrary compute
- Support enterprise workloads
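In spirit, BYOC means any callable compute becomes a marketplace capability behind a uniform job interface. A minimal sketch of that idea (in practice the logic would live inside a container exposing an HTTP endpoint; all names here are invented):

```python
# Toy sketch of the BYOC idea: register arbitrary compute under a name,
# then dispatch jobs to it uniformly. Real BYOC pipelines are containers
# behind HTTP; this registry and its names are purely illustrative.

CAPABILITIES = {}

def register(name):
    def wrap(fn):
        CAPABILITIES[name] = fn
        return fn
    return wrap

@register("word-count")            # stand-in for a custom ML workflow
def word_count(payload: str) -> dict:
    return {"words": len(payload.split())}

def run_job(name: str, payload: str) -> dict:
    if name not in CAPABILITIES:
        raise LookupError(f"capability {name!r} not offered")
    return CAPABILITIES[name](payload)

print(run_job("word-count", "bring your own compute"))
```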
Protocol Overview
Understand the Full Livepeer Network Design
Marketplace Benefits
- Developer choice — choose the best model, price, and performance
- Economic incentives — better nodes earn more work
- Scalability — network supply grows independently of demand
- Innovation unlock — new models and pipelines can be added instantly
- Decentralization — no single operator controls the workload flow
Summary
The Marketplace turns Livepeer into a competitive, discoverable, real-time AI compute layer.
- Gateways expose services
- Orchestrators execute them
- Applications choose the best fit
- Developers build on top of it
- Users benefit from low-latency, high-performance AI