Orchestrator pools are operator-run GPU pools where multiple GPU providers contribute hardware to a single Livepeer orchestrator, which aggregates capacity, routes jobs, and distributes rewards.

About Pools

Joining a community-run pool is a low-lift way to get started with the Livepeer network. Pools are run by experienced operators who manage the day-to-day operations of the orchestrator.
Pools are not official Livepeer infrastructure. They are community-run and vary in terms of pricing, rewards, and management. Always research and choose a pool that best fits your needs.
Pool operators handle tasks such as staking, delegating, and managing the orchestrator node, so you don’t need to have LPT or deal with on-chain actions. The trade-off is that you have less control over pricing, routing, and delegation decisions, and the revenue is shared with the pool operator.
Pool vs Orchestrator Comparison
(Comparison table; columns: Feature | Orchestrator Pool | Run an Orchestrator)

Join a Pool

Step 1: Choose a Pool

Joining a pool is akin to entering an operational partnership rather than performing a permissionless protocol action.
There is currently no canonical directory of pools; discovery is off‑chain and social, similar to mining pools. One of the few public orchestrator pools is Titan Node.
The first step is to find, research, and choose a pool that best fits your needs.
You can learn about reputable pools through social channels, but do your own research and due diligence before joining any pool. Common discovery channels include:
  • Livepeer community Discord
  • Forum posts and announcements
  • Direct outreach to Orchestrators
  • Existing infrastructure or GPU communities
Before committing hardware, confirm:
  • How earnings are calculated
  • How usage is measured
  • How disputes are handled
  • Whether GPUs can be removed at any time
  • What happens during downtime or network changes
  • On-chain identity and reputation

A legitimate orchestrator pool will clearly publish:
  1. Whether it accepts external GPUs
    • Some orchestrators operate only their own hardware
    • Pools explicitly accept third‑party GPU providers
  2. Supported hardware
    • GPU models (e.g. RTX 3090, A6000, A100, L40S)
    • VRAM requirements
    • Single‑GPU vs multi‑GPU nodes
  3. Supported workloads
    • Transcoding (video encoding)
    • AI inference
    • Real‑time pipelines (e.g. ComfyUI / ComfyStream)
    • Latency‑sensitive vs batch jobs
  4. Revenue split
    • Percentage paid to GPU owner vs pool operator
    • Any performance multipliers or penalties
  5. Payout details
    • Asset used (ETH, USDC, fiat, etc.)
    • Payout frequency
    • Minimum payout thresholds
  6. Operational requirements
    • Uptime expectations
    • Monitoring or alerting requirements
    • Geographic or networking constraints
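Before committing, it helps to turn a pool's published revenue split into concrete numbers. A minimal sketch in shell; every figure (fees, split, uptime multiplier) is an illustrative assumption, not any real pool's terms:

```shell
# Illustrative only: estimate a GPU owner's monthly payout under a
# hypothetical pool split. All numbers are assumptions, not real pool terms.
gross_fees_usd=120        # fees attributed to your GPU this month (assumed)
owner_share_pct=80        # pool pays 80% to the GPU owner (assumed)
uptime_multiplier=95      # 95% uptime earns a 0.95x adjustment (assumed)

# Integer math in cents to avoid floating point in POSIX shell
payout_cents=$(( gross_fees_usd * 100 * owner_share_pct / 100 * uptime_multiplier / 100 ))
printf 'Estimated payout: $%d.%02d\n' $(( payout_cents / 100 )) $(( payout_cents % 100 ))
```

Under these assumptions the owner nets $91.20 of $120 in attributed fees, which shows why the split percentage and any performance multipliers should be confirmed in writing.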

Step 2: Connect your GPU

Once a pool is selected, the GPU must be connected to the orchestrator’s infrastructure. There are three common connection models: containerized workloads, self‑hosted hardware, and cloud instances.

Containerized workloads

In this model, your GPU never runs protocol code; it only runs workloads dispatched by the orchestrator.

Best For:
  • Most GPU owners
  • Anyone prioritizing security and portability
  • Those who want to run other workloads on the same GPU
Process:
  • The orchestrator provides a container image and configuration
  • You run the container on your GPU machine
  • The container exposes standardized endpoints
  • The orchestrator schedules work to the container
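In practice, the process above often reduces to running one operator-supplied container. A hedged sketch; the image name, port, and environment variable are placeholders a pool operator would supply, not real Livepeer artifacts:

```shell
# Hypothetical sketch: the image name, port, and token variable are
# placeholders the pool operator would supply; not real Livepeer artifacts.
docker run -d \
  --name pool-worker \
  --gpus all \
  --restart unless-stopped \
  -p 9000:9000 \
  -e POOL_TOKEN="<token-from-operator>" \
  registry.example.com/pool/worker:latest
# --gpus all         exposes the GPU to the container (NVIDIA Container Toolkit)
# --restart ...      keeps the worker running across reboots for uptime expectations
# -p 9000:9000       example port the orchestrator schedules work to
```

Because the protocol code stays on the orchestrator's side, removing your GPU is as simple as stopping the container.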
Pros
  • Clear security boundaries
  • Reproducible environments
  • Easier upgrades and rollbacks
  • Minimal trust required between parties
Cons
  • Slightly higher setup complexity
  • Slightly lower performance (due to extra network hop)
Self‑hosted hardware

In this model, you grant the orchestrator remote access (e.g. SSH) to a machine you own. The GPU owner is responsible for installing and managing the GPU drivers and the Livepeer software.

Best For:
  • GPU owners with physical machines
  • Home labs or data‑center colocations
Process:
  • You provision a Linux machine with the required GPU drivers
  • The orchestrator provides setup instructions or scripts
  • Secure access (SSH / VPN) is established
  • The GPU is registered internally by the orchestrator
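Before granting access, it is worth confirming the machine meets the pool's stated hardware floor. A runnable sketch assuming a 24 GB VRAM minimum; the production query (commented out) uses `nvidia-smi`, while a sample value keeps the check runnable without a GPU:

```shell
# Sanity check before handing access to a pool: confirm the driver sees the
# GPU and that VRAM meets the pool's stated minimum (24 GB assumed here).
min_vram_mib=24576
# On a real machine you would read the live value:
#   vram_mib=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits)
# Sample value used here so the check runs without a GPU:
vram_mib=24576
if [ "$vram_mib" -ge "$min_vram_mib" ]; then
  echo "GPU meets pool VRAM requirement (${vram_mib} MiB)"
else
  echo "GPU below requirement: ${vram_mib} < ${min_vram_mib} MiB" >&2
fi
```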
Pros
  • Full hardware control
  • No vendor lock‑in or cloud markup
  • Lower latency
  • Easier to manage
Cons
  • Requires physical presence
  • More complex setup & management
  • Responsible for hardware uptime
  • Requires sysadmin experience
Cloud instances

In this model, the GPU owner runs a cloud GPU instance that the orchestrator connects to its pool. The GPU owner is responsible for managing the instance and the Livepeer software.

Best For:
  • Flexible capacity providers
  • Burst or on‑demand contributors
Process:
  • You launch a GPU instance on a cloud provider
  • Required drivers and runtime are installed
  • The orchestrator connects the instance to their pool
  • Jobs are routed when capacity is needed
Pros
  • Fast to scale up or down
  • No physical hardware management
Cons
  • Higher cost base
  • Margin depends heavily on utilization
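Because cloud margin hinges on utilization, estimating the breakeven point is useful before committing to an instance. A sketch with assumed prices, not real cloud or Livepeer rates:

```shell
# Illustrative breakeven check for the cloud model. The $/hr figures are
# assumptions, not real cloud or Livepeer prices. Integer math in cents.
instance_cost_cents_hr=80      # assumed cloud GPU instance cost: $0.80/hr
earn_cents_hr_at_full=120      # assumed earnings at 100% utilization: $1.20/hr

# Utilization (%) at which earnings cover the instance cost
breakeven_pct=$(( instance_cost_cents_hr * 100 / earn_cents_hr_at_full ))
echo "Breakeven utilization: ${breakeven_pct}%"
```

Under these assumed figures the instance only profits above roughly two-thirds utilization, which is why burst contributors should watch job volume closely.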

Step 3: Orchestrator Aggregates Your GPU

Once connected, your GPU becomes part of the orchestrator’s capacity pool. Your GPU is not individually visible to the protocol.
Aggregation is entirely managed by the orchestrator and includes:
  • Adding your GPU to internal capacity tracking
  • Advertising aggregate capacity to gateways
  • Routing jobs across all pooled GPUs
  • Load‑balancing based on performance and availability
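Conceptually, the routing step can be pictured as ranking pooled GPUs by a score. The data and scoring formula below are illustrative assumptions, not Livepeer internals:

```shell
# Illustrative only: how an orchestrator might rank pooled GPUs by a simple
# score (higher is better). Fields: name, perf (jobs/min), availability (%).
# The data and the perf-times-availability formula are assumptions.
rank=$(printf '%s\n' \
  'gpu-a 40 99' \
  'gpu-b 60 80' \
  'gpu-c 55 95' \
| awk '{ print $2 * $3, $1 }' | sort -rn | head -1 | awk '{ print $2 }')
echo "Next job routed to: $rank"
```

Note how the highest raw throughput (gpu-b) loses to a steadier contributor (gpu-c): availability weighs into routing, which is why uptime expectations appear in pool terms.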
From the Livepeer network’s perspective:
  • There is one orchestrator
  • Backed by pooled stake
  • Offering pooled capacity
The orchestrator:
  • Chooses which jobs run on which GPUs
  • Balances latency, cost, and reliability
  • May rotate workloads across contributors
Utilization depends on:
  • Demand on the network
  • Your GPU’s performance
  • Pool pricing strategy
The orchestrator is responsible for:
  • Maintaining uptime SLAs
  • Setting prices for services
  • Preserving on‑chain reputation
If the orchestrator performs well:
  • More jobs are routed to the pool
  • Delegation may increase
  • Earnings rise for all pool contributors
If performance degrades:
  • Job volume drops
  • Earnings decline

Step 4: Earn Rewards

There are no on-chain records of individual GPU contributions, so the pool’s rewards depend entirely on the orchestrator’s performance and reputation.
All rewards are earned by the orchestrator and distributed off‑chain to GPU contributors; GPU owners do not earn rewards on‑chain.
  • Earnings are pooled and split
  • Payouts are made off‑chain
  • No on‑chain rewards for individual GPUs
Pool revenue comes from two sources:
  1. Usage fees
  • Paid by applications and gateways
  • Based on actual work performed
  • Increasingly dominant as network usage grows
  2. Inflation rewards
  • Minted by the protocol
  • Earned by the orchestrator’s stake
  • Typically shared with pool participants
Payouts are defined by the pool’s terms and may include:
  • Asset type (ETH, USDC, fiat, etc.)
  • Payout schedule (daily, weekly, monthly)
  • Performance adjustments
  • Minimum thresholds
All payouts are off‑chain and depend on the operator’s accounting systems.
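Minimum thresholds mean small balances roll over between payout cycles. A sketch of that accounting, with the threshold and weekly earnings assumed:

```shell
# Sketch of threshold-based payouts: balances below the minimum roll over.
# The threshold and weekly earnings are assumptions, not real pool terms.
threshold_cents=5000           # assumed $50 minimum payout
balance_cents=0
for earned in 1800 2100 1900; do   # three weeks of assumed earnings (cents)
  balance_cents=$(( balance_cents + earned ))
  if [ "$balance_cents" -ge "$threshold_cents" ]; then
    echo "payout: ${balance_cents} cents"
    balance_cents=0
  else
    echo "rollover: ${balance_cents} cents"
  fi
done
```

Under these assumptions the first two weeks roll over and the third triggers a payout, so low-throughput GPUs may see payments less often than the nominal schedule suggests.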
Last modified on February 18, 2026