Orchestrator pools are operator-run GPU pools where multiple GPU providers contribute hardware to a single Livepeer orchestrator, which aggregates capacity, routes jobs, and distributes rewards.
About Pools
Joining a community-run pool is a low-lift way to get started with the Livepeer network. Pools are run by experienced operators who manage the day-to-day operation of an orchestrator on the network. They handle tasks such as staking, delegating, and managing the orchestrator node, so you don’t need to hold LPT or deal with on-chain actions. The trade-off is that you have less control over pricing, routing, and delegation decisions, and revenue is shared with the pool operator.
Pool vs Orchestrator Comparison
| Feature | Orchestrator Pool | Run an Orchestrator |
|---|---|---|
| LPT stake and on-chain actions | Handled by the pool operator | Handled by you |
| Node operation and maintenance | Pool operator | You |
| Control over pricing, routing, and delegation | Limited | Full |
| Revenue | Shared with the pool operator | Kept by the orchestrator |
| Effort to get started | Low | High |
Join a Pool
Choose a Pool
Joining a pool is akin to entering an operational partnership rather than a permissionless protocol action. The first step is to find, research, and choose a pool that best fits your needs.
There is currently no canonical directory of pools; discovery is off‑chain and social, similar to mining pools.
One of the few public orchestrator pools is Titan Node.
Finding a Pool
You can find out about reputable pools through social channels such as the Livepeer Discord, the community forum, and operator social media, but make sure you do your own research and due diligence before joining any pool.
Pool Due Diligence Checklist
Before committing hardware, confirm that the pool clearly publishes:
- How earnings are calculated
- How usage is measured
- How disputes are handled
- Whether GPUs can be removed at any time
- What happens during downtime or network changes
- On-chain identity and reputation
- Whether it accepts external GPUs
  - Some orchestrators operate only their own hardware; pools explicitly accept third‑party GPU providers
- Supported hardware
  - GPU models (e.g. RTX 3090, A6000, A100, L40S)
  - VRAM requirements
  - Single‑GPU vs multi‑GPU nodes
- Supported workloads
  - Transcoding (video encoding)
  - AI inference
  - Real‑time pipelines (e.g. ComfyUI / ComfyStream)
  - Latency‑sensitive vs batch jobs
- Revenue split (see the sketch after this list)
  - Percentage paid to GPU owner vs pool operator
  - Any performance multipliers or penalties
- Payout details
  - Asset used (ETH, USDC, fiat, etc.)
  - Payout frequency
  - Minimum payout thresholds
- Operational requirements
  - Uptime expectations
  - Monitoring or alerting requirements
  - Geographic or networking constraints
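The revenue split and payout terms determine what a GPU actually takes home. As a rough, hedged illustration (none of the numbers below come from a real pool), the following sketch estimates a monthly payout from a hypothetical gross rate, utilization, owner share, and minimum payout threshold:

```python
# Rough estimate of monthly GPU earnings under a pool's published terms.
# All inputs are hypothetical placeholders; substitute the pool's real numbers.

def estimate_monthly_payout(
    gross_rate_per_hour: float,  # what the pool earns per busy GPU-hour
    utilization: float,          # fraction of hours the GPU is actually working (0-1)
    gpu_owner_share: float,      # fraction of earnings paid to the GPU owner (0-1)
    min_payout: float,           # pool's minimum payout threshold
    hours_in_month: int = 730,
) -> float:
    gross = gross_rate_per_hour * utilization * hours_in_month
    owner_earnings = gross * gpu_owner_share
    # Below the threshold, nothing is paid this cycle (it usually rolls over).
    return owner_earnings if owner_earnings >= min_payout else 0.0

# Example: $0.40/hour gross, 35% utilization, 70/30 split, $25 minimum payout.
print(estimate_monthly_payout(0.40, 0.35, 0.70, 25.0))  # ~71.54
```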
Connect Your GPU
Once a pool is selected, the GPU must be connected to the orchestrator’s infrastructure. There are three common connection models:
BYO Container (Preferred)
In this model, your GPU never runs protocol code; it only runs workloads dispatched by the orchestrator.

Best For:
- Most GPU owners
- Anyone prioritizing security and portability
- Those who want to run other workloads on the same GPU
How it works:
- The orchestrator provides a container image and configuration
- You run the container on your GPU machine
- The container exposes standardized endpoints
- The orchestrator schedules work to the container
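The exact image, ports, and endpoints are defined by the pool operator. Purely as a sketch of what keeping such a container healthy might look like (the container name, port, and /health endpoint below are hypothetical, not part of any Livepeer or pool API), a contributor could run a small watchdog alongside it:

```python
# Minimal watchdog for a pool-provided worker container.
# The container name, port, and /health endpoint are hypothetical;
# use whatever the pool operator's setup instructions specify.
import subprocess
import time
import urllib.request

CONTAINER_NAME = "pool-worker"               # hypothetical container name
HEALTH_URL = "http://127.0.0.1:8000/health"  # hypothetical health endpoint

def container_running(name: str) -> bool:
    out = subprocess.run(
        ["docker", "ps", "--filter", f"name={name}", "--format", "{{.Names}}"],
        capture_output=True, text=True,
    )
    return name in out.stdout.split()

def healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    if not container_running(CONTAINER_NAME) or not healthy(HEALTH_URL):
        print("worker unhealthy; restarting container")
        subprocess.run(["docker", "restart", CONTAINER_NAME])
    time.sleep(60)
```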
Pros
- Clear security boundaries
- Reproducible environments
- Easier upgrades and rollbacks
- Minimal trust required between parties
Cons
- Slightly higher setup complexity
- Slightly lower performance (due to extra network hop)
Bare Metal
The orchestrator provides a remote access method (e.g. SSH or VPN) for connecting your machine to their infrastructure.
The GPU owner is responsible for installing and managing the GPU driver and the Livepeer software.

Best For:
- GPU owners with physical machines
- Home labs or data‑center colocations
How it works:
- You provision a Linux machine with the required GPU drivers
- The orchestrator provides setup instructions or scripts
- Secure access (SSH / VPN) is established
- The GPU is registered internally by the orchestrator
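Before the orchestrator registers the machine, it is worth a quick check that the GPU and driver are actually visible. A minimal pre-flight sketch, assuming an NVIDIA GPU and the standard nvidia-smi tool:

```python
# Pre-flight check before registering a bare-metal machine with a pool:
# verifies nvidia-smi is present and prints the detected GPUs and driver version.
import shutil
import subprocess
import sys

def gpu_preflight() -> None:
    if shutil.which("nvidia-smi") is None:
        sys.exit("nvidia-smi not found: install the NVIDIA driver first")
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.strip().splitlines():
        print("detected GPU:", line)

if __name__ == "__main__":
    gpu_preflight()
```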
Pros
- Full hardware control
- No vendor lock‑in or cloud markup
- Lower latency
- Easier to manage
Cons
- Requires physical presence
- More complex setup & management
- Responsible for hardware uptime
- Requires sysadmin experience
Cloud GPU
In this model the GPU capacity comes from a rented cloud instance rather than hardware you own. The GPU owner is responsible for launching and managing the instance and the Livepeer software.

Best For:
- Flexible capacity providers
- Burst or on‑demand contributors
How it works:
- You launch a GPU instance on a cloud provider
- Required drivers and runtime are installed
- The orchestrator connects the instance to their pool
- Jobs are routed when capacity is needed
Pros
- Fast to scale up or down
- No physical hardware management
Cons
- Higher cost base
- Margin depends heavily on utilization
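Because a cloud instance bills by the hour whether or not jobs arrive, margin is mostly a function of utilization. A back-of-the-envelope sketch with placeholder prices (not real Livepeer or cloud rates):

```python
# Break-even utilization for a rented cloud GPU contributed to a pool.
# All prices are placeholders; plug in your cloud rate and the pool's terms.

cloud_cost_per_hour = 0.80   # what the cloud provider charges per hour
gross_rate_per_hour = 2.00   # what the pool earns per busy GPU-hour
gpu_owner_share = 0.70       # fraction of earnings paid to the GPU owner

# Earnings only accrue for busy hours, but the instance costs money every hour.
break_even_utilization = cloud_cost_per_hour / (gross_rate_per_hour * gpu_owner_share)
print(f"break-even utilization: {break_even_utilization:.0%}")  # ~57%

utilization = 0.40
hourly_margin = gross_rate_per_hour * gpu_owner_share * utilization - cloud_cost_per_hour
print(f"hourly margin at {utilization:.0%} utilization: ${hourly_margin:.2f}")  # -$0.24
```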
Orchestrator Aggregates Your GPU
Once connected, your GPU becomes part of the orchestrator’s capacity pool. Your GPU is not visible individually to the protocol.
Aggregation Details
Aggregation is entirely managed by the orchestrator and includes:
- Adding your GPU to internal capacity tracking
- Advertising aggregate capacity to gateways
- Routing jobs across all pooled GPUs
- Load‑balancing based on performance and availability
From the protocol’s perspective:
- There is one orchestrator
- Backed by pooled stake
- Offering pooled capacity
Scheduling and Utilisation
The orchestrator:
- Chooses which jobs run on which GPUs
- Balances latency, cost, and reliability
- May rotate workloads across contributors
How much work your GPU receives depends on:
- Demand on the network
- Your GPU’s performance
- Pool pricing strategy
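Exactly how jobs are scored and assigned is internal to each orchestrator. The sketch below is only one illustrative way pooled GPUs might be ranked by availability and recent performance; the fields and scoring rule are assumptions, not Livepeer’s actual scheduler:

```python
# Illustrative (not actual Livepeer) job routing across pooled GPUs:
# pick the available GPU with the best blend of speed and reliability.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class PooledGPU:
    name: str
    available: bool        # currently free to take a job
    avg_latency_ms: float  # recent average processing latency
    success_rate: float    # fraction of recent jobs completed successfully

def route_job(gpus: list[PooledGPU]) -> PooledGPU | None:
    candidates = [g for g in gpus if g.available]
    if not candidates:
        return None
    # Lower latency and higher success rate both improve the score.
    return min(candidates, key=lambda g: g.avg_latency_ms / max(g.success_rate, 1e-6))

pool = [
    PooledGPU("rtx3090-home", True, 120.0, 0.99),
    PooledGPU("a6000-colo", True, 90.0, 0.95),
    PooledGPU("l40s-cloud", False, 60.0, 0.99),
]
print(route_job(pool).name)  # a6000-colo
```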
Uptime, Pricing, and Reputation
The orchestrator is responsible for:
- Maintaining uptime SLAs
- Setting prices for services
- Preserving on‑chain reputation
When the pool performs well:
- More jobs are routed to the pool
- Delegation may increase
- Earnings rise for all pool contributors
When it performs poorly:
- Job volume drops
- Earnings decline
Earn Rewards
All rewards are earned by the orchestrator and distributed off‑chain to GPU contributors.
GPU owners do not earn rewards on-chain.
- Earnings are pooled and split
- Payouts are made off‑chain
- No on‑chain rewards for individual GPUs
Reward Sources
- Usage fees
  - Paid by applications and gateways
  - Based on actual work performed
  - Increasingly dominant as network usage grows
- Inflation rewards
  - Minted by the protocol
  - Earned by the orchestrator’s stake
  - Typically shared with pool participants
Payouts
Payouts are defined by the pool’s terms and may include:
- Asset type (ETH, USDC, fiat, etc.)
- Payout schedule (daily, weekly, monthly)
- Performance adjustments
- Minimum thresholds
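As an illustration of how these terms might combine, here is a hedged sketch of a pro-rata off-chain split with an operator commission and a minimum threshold; the 70/30 split and $25 threshold are assumptions, not any specific pool’s formula:

```python
# Illustrative off-chain payout: split pooled earnings pro-rata by work performed,
# take the operator's cut, and hold back anything under the minimum threshold.
# The 70/30 split and $25 threshold are assumptions, not any real pool's terms.

def distribute(pool_earnings: float, work_by_gpu: dict[str, float],
               owner_share: float = 0.70, min_payout: float = 25.0) -> dict[str, float]:
    total_work = sum(work_by_gpu.values())
    payouts = {}
    for gpu, work in work_by_gpu.items():
        owed = pool_earnings * (work / total_work) * owner_share
        # Amounts under the threshold are typically carried over to the next cycle.
        payouts[gpu] = round(owed, 2) if owed >= min_payout else 0.0
    return payouts

print(distribute(1000.0, {"gpu-a": 600.0, "gpu-b": 370.0, "gpu-c": 30.0}))
# {'gpu-a': 420.0, 'gpu-b': 259.0, 'gpu-c': 0.0}
```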