Hobbyist vs Commercial
The Livepeer Orchestrator ecosystem supports both models, but they reward different behaviour. One is optimised for lower-risk participation and base rewards. The other is optimised for being chosen, trusted, and paid for by Gateways running user-facing products. Neither model is inherently better. They serve different operator goals. Many of the strongest nodes eventually run a hybrid, using inflation as the base layer and service fees as the part that scales.
Why Service Fees Scale
Commercial operators care most about ETH service fees because traffic, pricing, and uptime drive them directly. That is the core shift in the business case: the upside comes from being part of an application’s serving path instead of relying on bonded LPT size alone. A large-stake Orchestrator earns a fixed percentage of the round’s inflation regardless of how many jobs it processes. An Orchestrator serving high-volume AI inference workloads earns ETH proportional to every pixel processed and every model inference returned. For an Orchestrator actively serving a high-volume Gateway - a streaming platform, an AI product, or a live video application - monthly ETH fee income from job processing can exceed LPT inflation income by a substantial margin.
Commercial fee income is variable and depends on Gateway demand, job mix, and market pricing conditions. Inflation rewards are predictable by stake. Most commercial operators treat inflation as the base layer and fees as the upside.
What Commercial Operation Requires
This is where many otherwise capable operators self-select out. Commercial service is not about running the same stack a little harder. It means meeting operational expectations that Gateways can depend on in front of their own users.
Uptime and reliability
A Gateway operator building a product on Livepeer’s network needs the Orchestrators on its routing path to be consistently available. If an Orchestrator fails mid-session, the Gateway must fail over - introducing latency and degrading user experience. Repeated failures result in the Orchestrator being deprioritised in the Gateway’s selection algorithm. Commercial Orchestrators target 99%+ uptime. This requires:
- Automated monitoring with immediate alerts on node failure
- Automated restart and recovery
- Stable, redundant connectivity (not shared home broadband)
- Consistent power supply (UPS or colocation)
- Hardware health monitoring (GPU temperatures, VRAM utilisation)
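The automated restart and recovery requirement above is usually met with a process supervisor rather than custom scripts. A minimal sketch of a systemd unit - the binary path, flags, and service user are illustrative, so adapt them to your install before copying the file to /etc/systemd/system/ and enabling it:

```shell
# Sketch: a systemd unit providing automated restart after a crash.
# Paths, flags, and the service user are illustrative assumptions.
cat > livepeer-orchestrator.service <<'EOF'
[Unit]
Description=Livepeer Orchestrator
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/livepeer -orchestrator -transcoder
Restart=always
RestartSec=5
User=livepeer

[Install]
WantedBy=multi-user.target
EOF
```

`Restart=always` with a short `RestartSec` covers crash recovery; it does not replace external monitoring, which is still needed to catch a host that goes down entirely.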
Model warm-up management
For AI inference workloads, cold model starts (loading a model from disk into VRAM on first request) introduce latency that breaks user-facing SLAs. Commercial AI Orchestrators pre-load all advertised models at startup and keep them warm. The practical implication: the VRAM requirements for commercial AI operation are determined by the sum of all models that must be simultaneously loaded, not only the largest single model.
Latency targets
Gateways rank Orchestrators by response latency, uptime history, and job success rate. Consistently slow responses - even within acceptable job completion time - affect long-term selection probability. Keeping latency low requires:
- Network proximity to high-volume Gateways
- Low GPU scheduling latency (dedicated GPU, not shared)
- Fast storage for model weights (NVMe preferred over SATA)
Working with Gateways
Anonymous discovery is enough to get started. It is rarely enough to build durable commercial traffic. Commercial operators still need to be competitively discoverable, but they also work deliberately to become a reliable option for specific Gateway needs.
Per-Gateway pricing
The -pricePerGateway flag allows Orchestrators to set different prices for specific Gateway addresses. This is the primary tool for commercial Gateway relationships.
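The flag points the node at a configuration listing each Gateway address and its price. A minimal sketch, assuming a JSON file - the field names below are assumptions, so verify the exact schema against the go-livepeer documentation for your version:

```shell
# Sketch: per-Gateway pricing via -pricePerGateway.
# The JSON field names are an assumption - verify the exact schema
# against the go-livepeer documentation for your version.
cat > pricePerGateway.json <<'EOF'
{
  "gateways": [
    {
      "ethaddress": "0x1234567890abcdef1234567890abcdef12345678",
      "priceperunit": 250,
      "pixelsperunit": 1
    }
  ]
}
EOF

# Then point the node at it (not executed here):
#   livepeer -orchestrator -pricePerUnit 300 -pricePerGateway pricePerGateway.json
```

The pattern this enables: a public -pricePerUnit as the default rate, with lower per-Gateway rates for partners sending reliable volume.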
Capability signalling
Gateways discover AI capabilities through the capability manifest returned during session negotiation. Commercial Orchestrators ensure their declared capabilities are accurate and stable - advertising a model that is slow to load or frequently unavailable damages the Gateway’s product and the Orchestrator’s selection score. Practical discipline for commercial capability management:
- Declare only models that are loaded and warm at startup
- Remove capability declarations for models that are not being actively served
- Use -aiModels to specify exactly which pipeline/model combinations to load on startup
- Monitor model load times and remove slow-start models from the active set
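The declarations above live in the file passed to -aiModels. A sketch of a configuration loading a single pipeline warm at startup - the model ID and per-unit price are illustrative, and the field names follow the published aiModels.json convention, so verify them against the current Livepeer AI documentation:

```shell
# Sketch: declare exactly the models to serve, all warm at startup.
# Model ID and price are illustrative; field names follow the
# published aiModels.json convention - verify against current docs.
cat > aiModels.json <<'EOF'
[
  {
    "pipeline": "text-to-image",
    "model_id": "SG161222/RealVisXL_V4.0_Lightning",
    "price_per_unit": 4000000000,
    "warm": true
  }
]
EOF

# Then load it at startup (not executed here):
#   livepeer -orchestrator -aiWorker -aiModels aiModels.json
```

Keeping this file short is the point: every entry with "warm": true claims VRAM for the lifetime of the node, so the list should contain only models actually being served.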
Building Gateway relationships
Active commercial relationships with Gateways typically develop through:
- Consistent performance history visible on the Livepeer Explorer
- Participation in the #orchestrators channel on the Livepeer Discord
- Direct outreach to Gateway SPEs and ecosystem partners
- Demonstrated capability support for pipelines that specific Gateways need
How to Position for Commercial Workloads
The shift from passive inflation earner to active commercial operator usually means narrowing focus, not broadening it. The goal is to become reliably good at the workloads and service levels that a Gateway actually wants to buy.
Capability selection
Commercial operators do not win by listing everything. They win by being reliably good at work that Gateways are already trying to source. Check current network demand at tools.livepeer.cloud/ai/network-capabilities to see which pipelines are being routed and at what prices.
Prioritise:
- Pipelines with few available Orchestrators and active demand
- High-VRAM models that exclude commodity GPU competition
- Cascade AI pipelines, if hardware supports them, for their higher per-job value
Pricing discipline
Commercial pricing is part market positioning and part relationship management. It requires:
- Understanding the Gateway’s maxPricePerUnit ceiling for each pipeline
- Setting prices that are competitive but not floor-level (under-pricing signals low quality to some Gateway operators)
- Using -pricePerGateway to offer volume discounts to specific Gateways
- Using -autoAdjustPrice carefully - automatic adjustment can undercut commercial relationships
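A sketch of how these flags combine at node startup. The prices are illustrative (go-livepeer prices are denominated in wei per unit of work), and automatic adjustment is disabled so that a rate agreed with a specific Gateway cannot drift out from under the relationship:

```shell
# Sketch: stable commercial pricing at startup (prices illustrative,
# in wei per unit of work). Disabling -autoAdjustPrice keeps the
# advertised price from drifting under a negotiated per-Gateway rate.
livepeer -orchestrator \
  -pricePerUnit 300 \
  -pricePerGateway pricePerGateway.json \
  -autoAdjustPrice=false
```

Whether to disable automatic adjustment is a judgment call: it trades responsiveness to transaction-cost changes for predictability toward Gateways.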
Infrastructure investment
Commercial operations typically require infrastructure changes that hobbyist setups do not:
- Colocation or cloud GPU instead of home hardware, for reliability and connectivity
- Dedicated GPUs with no competing workloads (mining rigs sharing GPUs with Livepeer introduce unpredictable latency)
- Redundant connectivity with failover (not a single home ISP connection)
- UPS or colocation power for uptime targets above 99%
Monitoring and alerting
Commercial uptime targets require monitoring that catches problems before a Gateway notices them. go-livepeer exposes a Prometheus metrics endpoint (port 7935 by default). Connect this to an alerting stack (Grafana, PagerDuty, or equivalent) to detect:
- Node offline or unreachable
- GPU memory pressure (model eviction)
- Reward call failures
- Unusual session failure rates
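The first of these checks - node offline or unreachable - can be probed directly against the metrics endpoint. A minimal sketch using the default port from the text; the probe itself is our own helper, not part of go-livepeer:

```shell
# Minimal health probe: returns non-zero when the node's Prometheus
# metrics endpoint is unreachable, so an alerting stack can page on it.
check_metrics() {
  url="http://127.0.0.1:${1:-7935}/metrics"
  if curl -sf --max-time 5 "$url" > /dev/null 2>&1; then
    echo "OK: $url reachable"
  else
    echo "ALERT: $url unreachable"
    return 1
  fi
}

check_metrics 7935 || true  # demo call against the default port
```

In practice a Prometheus scrape of the same endpoint, with alert rules on the richer metrics (GPU memory, session failures, reward calls), replaces a bare reachability check like this.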
The Commercial Operator Landscape
Commercial operation does not look the same across the network. The common thread is fee revenue, but the operating model changes depending on who owns the GPUs, who manages stake, and how traffic is sourced.
Pool operators manage the Orchestrator registration, on-chain staking, and reward calling for a fleet of GPU workers. Workers register under the pool’s Orchestrator address; the pool earns a margin on their job income. Pool operators function as GPU infrastructure businesses, combining the service fee model with a managed-Orchestrator offering.
Enterprise GPU operators run dedicated fleets serving specific AI application workloads. These operators serve Gateways that power user-facing AI products and require SLA-level commitments. Their hardware is typically data-centre grade with redundant connectivity.
Dual-workload operators run both video transcoding and AI inference from the same infrastructure, earning fees from both streams. This is the natural next step for video Orchestrators who invest in high-VRAM GPUs.
The Livepeer Forum and the #orchestrators Discord channel are the best current sources for tracking active commercial operators and the workloads they are serving.
Related Pages
Operating Rationale
Financial evaluation - costs, revenue streams, and the decision matrix for choosing your path.
Pricing Strategy
How to configure competitive prices for video and AI workloads, including per-Gateway rates.
Working with Gateways
The technical and operational details of the Gateway-Orchestrator relationship.
Operator Impact
Why operating an Orchestrator matters beyond earnings - governance weight, network stewardship, and protocol influence.