## How gateways find you
There are four mechanisms by which a gateway builds its list of orchestrators to consider:

### On-chain discovery
The gateway queries the Livepeer subgraph on Arbitrum for all registered, active orchestrators. Your on-chain registration — service URI, stake, cuts — makes you visible here automatically once you are in the active set.
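For illustration, the underlying lookup resembles the GraphQL query below. The entity and field names are assumptions drawn from the public Livepeer subgraph schema; verify them against the current schema before relying on them.

```shell
# Sketch: a GraphQL query for active orchestrators against the Livepeer
# subgraph. Entity and field names are assumptions -- check the live schema.
QUERY='{
  transcoders(where: { active: true }) {
    id
    serviceURI
    totalStake
    rewardCut
    feeShare
  }
}'
echo "$QUERY"
# POST it as the "query" field of a JSON body to your Graph gateway endpoint
# for the Livepeer Arbitrum subgraph, e.g.:
#   curl -s -X POST "$SUBGRAPH_URL" -H 'Content-Type: application/json' \
#     --data "{\"query\": $(printf '%s' "$QUERY" | jq -Rs .)}"
```

The response should mirror the registration data you see on the Explorer: your service URI, stake, and cuts.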
### Direct configuration
Gateway operators can configure specific orchestrators directly with `-orchAddr`. This bypasses the discovery process entirely. Private pools and enterprise setups often use direct configuration.

### Webhook discovery
Dynamic discovery via an external service: `-orchWebhookUrl=https://discovery.example.com/orchestrators`. The gateway calls this URL to get a list of eligible orchestrators, enabling custom filtering or whitelisting without on-chain registration.

### Network Capabilities API
The gateway calls `GET /getNetworkCapabilities` to query what capabilities and models are available across the network. This is primarily used for AI workload routing; your `aiModels.json` configuration is what makes you discoverable for specific pipelines.
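A minimal sketch of the routing decision this enables. The JSON shape below is an illustrative assumption, not the exact `getNetworkCapabilities` response schema; it shows the kind of filtering a gateway does when matching an AI job to an orchestrator.

```shell
# Illustrative only: filter a getNetworkCapabilities-style response for
# orchestrators that advertise a given pipeline. The JSON shape is an assumption.
RESPONSE='{
  "orchestrators": [
    { "address": "0xab01", "pipelines": ["text-to-image"] },
    { "address": "0xcd02", "pipelines": ["audio-to-text"] }
  ]
}'
echo "$RESPONSE" | jq -r '.orchestrators[]
  | select(.pipelines | index("text-to-image"))
  | .address'
```

Only orchestrators whose advertised pipelines match the request survive the filter; if your pipeline is not in that list, you are never a candidate.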
## What you advertise to gateways
Every time a gateway queries you, your node responds with an `OrchestratorInfo` message containing your full offering.
The `capabilities` field declares what transcoding profiles and output formats you support. The `capabilities_prices` field contains your per-pipeline AI pricing. Both are built automatically from your go-livepeer configuration and `aiModels.json`.
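For orientation, a trimmed `OrchestratorInfo` rendered as JSON might look like the following. Field names follow go-livepeer's `net.OrchestratorInfo` protobuf message, but every value here is invented, and the real message carries additional fields (ticket params, auth token, storage).

```shell
# Illustrative only: a trimmed OrchestratorInfo rendered as JSON.
# Field names follow go-livepeer's net.OrchestratorInfo message; all values
# are invented and the real message has more fields than shown here.
INFO='{
  "transcoder": "https://203.0.113.10:8935",
  "price_info": { "pricePerUnit": 1200, "pixelsPerUnit": 1 },
  "capabilities": { "bitstring": [70368744177664] },
  "capabilities_prices": [
    { "pricePerUnit": 50, "pixelsPerUnit": 1 }
  ]
}'
echo "$INFO"
```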
## How gateways select you
Discovery gives a gateway a list of candidates. Selection narrows that list to one (or a few) nodes that actually receive a given job. The selection algorithm is multi-factor.

The five selection factors:

- Capability match: whether you can run the requested profile or pipeline at all
- Price: your advertised price versus the gateway's maximum
- Performance score: your historical success rate and latency
- Stake: your position in the stake-weighted active set
- Reachability: whether the gateway can actually connect to your service URI

Practical implication for AI jobs: capability match and price dominate AI routing. Your stake is less important than for video transcoding. If you want AI jobs, ensure your pipeline is registered, your warm model is loaded, and your price is within market range.

Practical implication for video jobs: price and performance score are the primary drivers after entering the active set. A competitive price and high success rate will consistently outperform a lower-ranked orchestrator with a high price.

## What you can control to get more work
### 1. Price competitively
Pricing is binary before it is graduated: if your price exceeds the gateway's maximum, you receive zero work from that gateway. After clearing the ceiling, lower prices increase your attractiveness.

Check current market rates:
- Compare your `-pricePerUnit` to other active orchestrators on Livepeer Explorer
- For AI, check tools.livepeer.cloud to see per-pipeline pricing from other operators
Gateways cap what they will pay with `-maxPricePerUnit` for transcoding and `-maxPricePerCapability` (a JSON structure) for AI pipelines. You cannot see what individual gateways have set these to, but you can infer from the market. See Pricing Strategy for how to adjust your prices.
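For context, a gateway-side `-maxPricePerCapability` document has roughly the shape sketched below. This is an approximation: the exact schema varies by go-livepeer version, and the pipeline, model ID, and prices here are invented.

```shell
# Illustrative only: the rough shape of a gateway-side -maxPricePerCapability
# document. Exact schema varies by go-livepeer version; all values are invented.
MAX_PRICES='{
  "capabilities_prices": [
    {
      "pipeline": "text-to-image",
      "model_id": "stabilityai/sd-turbo",
      "price_per_unit": 50,
      "pixels_per_unit": 1
    }
  ]
}'
echo "$MAX_PRICES"
```

If your advertised per-pipeline price lands above the corresponding entry on a gateway, that gateway routes nothing to you for that pipeline.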
### 2. Keep your service URI correct and reachable
If gateways cannot connect to your service URI, you receive no work, even if you are in the active set and your price is competitive.

Common causes of unreachability: IP changed without an on-chain update, a firewall change blocking port 8935, a certificate issue on the TLS endpoint, or NAT not forwarding correctly. Update your on-chain service URI via `livepeer_cli` if your IP or hostname has changed.
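To test reachability, a quick probe along these lines works; the host below is a placeholder for your on-chain service URI, and 8935 is go-livepeer's default service port.

```shell
# Sketch: probe your advertised service URI from a machine outside your network.
# SERVICE_URI is a placeholder -- substitute your on-chain value. -k accepts the
# node's self-signed certificate.
SERVICE_URI="https://203.0.113.10:8935"
CMD="curl -vk --max-time 10 $SERVICE_URI/"
echo "$CMD"
# A completed TLS handshake (even with an HTTP error status) means the port is
# reachable; a timeout points at firewall or NAT problems.
```

Run the printed command from outside your own network; testing from the orchestrator host itself will not catch NAT or firewall issues.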
### 3. Register your AI capabilities correctly
For AI jobs, capability registration is the prerequisite: your pipelines and warm models must be visible to the network.

Also check externally:

- tools.livepeer.cloud/ai/network-capabilities shows all AI-capable orchestrators visible to the network and which models are warm

If your pipeline is missing, review your `aiModels.json` and confirm the AI runner container started successfully.
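For the local check, a sketch like the following works against the node's own API; 7935 is go-livepeer's default CLI/API port, and the exact JSON layout of the status payload varies by version.

```shell
# Sketch: inspect your node's local status endpoint for registered capabilities.
# 7935 is go-livepeer's default CLI/API port; the exact JSON layout varies by
# version, so browse it with jq and look for your pipelines and model IDs.
STATUS_CMD="curl -s http://localhost:7935/status"
echo "$STATUS_CMD"
# e.g.: curl -s http://localhost:7935/status | jq 'keys'
```

If your pipeline and model do not appear locally, no external tool will see them either.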
### 4. Maintain high performance scores
Performance scoring is based on your historical success rate and latency. Missed segments, slow responses, and OOM failures all decrease your score and reduce your selection probability.

What drives good performance: stable hardware, sufficient VRAM, fast network, and consistent uptime. A node that fails 5% of its segments will eventually score lower than a node with identical pricing but near-zero failures.
### 5. Build stake for video transcoding
For video transcoding, selection probability is weighted by total stake. Being in the top 10 by stake versus the top 50 means meaningfully more job volume from stake-weighted gateways.

For AI workloads, stake is less decisive; capability and price are the primary routing criteria.
## Gateway Loki API: understanding selection decisions
The Livepeer Foundation operates a public Loki instance that exposes gateway logs. This API lets you see what is happening inside gateway nodes, including why specific orchestrators were or were not selected.

Base URL: `https://loki.livepeer.report`
Querying these logs, you can find:
- Selection events including or excluding your orchestrator address
- Price rejection messages (your price exceeding the gateway maximum)
- Capability mismatch messages (requested pipeline not found in your offerings)
- Connection failures (gateway could not reach your service URI)
Pipe query responses through `jq` for readable formatting.
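A sketch of building such a query against Loki's standard `query_range` HTTP endpoint. The `{container="gateway"}` label selector is an assumption about this deployment; list the instance's available labels via `/loki/api/v1/labels` first.

```shell
# Sketch: build a Loki query_range request that searches gateway logs for
# lines mentioning your orchestrator address. The label selector is an
# assumption -- check the instance's labels via /loki/api/v1/labels.
BASE="https://loki.livepeer.report"
ADDR="0xYourOrchestratorAddress"
CMD="curl -G -s $BASE/loki/api/v1/query_range --data-urlencode 'query={container=\"gateway\"} |= \"$ADDR\"' --data-urlencode 'limit=50'"
echo "$CMD"
```

Run the printed command (substituting your real address) and pipe the result through `jq '.data.result'` to read the matching log streams.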
## Debugging missing jobs
Use this checklist when you are in the active set and job flow stays at zero:

- Is your price at or below what gateways will pay? A price over the gateway maximum means zero work.
- Is your service URI reachable from outside your network?
- For AI: are your pipelines and warm models registered and visible on the network?
- Do the gateway Loki logs show price rejections, capability mismatches, or connection failures for your address?

Related pages:

- Configure Pricing: setting `pricePerUnit` and per-capability AI pricing to be competitive.
- AI Configuration: setting up `aiModels.json` and capability registration.
- Troubleshooting: full error reference including service URI and capability issues.
- Orchestrator Tools: Explorer, Prometheus, and Loki tools for understanding network state.