The most common combined deployment is an off-chain gateway routing to a dedicated on-chain orchestrator. The gateway handles inbound client traffic; the orchestrator handles on-chain protocol participation. Running both roles on one machine is supported but requires deliberate port separation.

Run both a gateway and an orchestrator when you need end-to-end control over client traffic, routing, and workload execution. This page explains the deployment patterns, port separation, self-routing choices, and price alignment rules that matter when one operator owns both roles. For detailed setup of each role, see the dedicated gateway and orchestrator guides.

Deployment patterns

Three deployment patterns cover most use cases:
  • Off-chain gateway routing to a dedicated on-chain orchestrator — the most common combined deployment
  • Both roles colocated on one machine, with deliberate port separation
  • On-chain gateway and on-chain orchestrator, with jobs routed through normal protocol discovery

Port allocation

The two roles listen on different network interfaces and ports. On a single machine, assign each process its own non-overlapping ports; the default gateway and orchestrator ports do not overlap, but verify no other process is bound to them before starting both roles:
Check gateway and orchestrator ports
ss -tlnp | grep -E ':7935|:8935|:1935|:7936'
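Once the ports check out, each process can be started with its own explicit bindings. A minimal sketch for one machine — the flag names (-serviceAddr, -cliAddr, -rtmpAddr, -gateway) follow go-livepeer conventions but are assumptions here; verify them and the default ports against your release:

```shell
# Sketch only: explicit, non-overlapping ports for two processes on one host.
# Flag names and port assignments are assumptions; check your release docs.

# Orchestrator: protocol service on 8935, local CLI/metrics on 7935
livepeer -orchestrator \
  -serviceAddr 0.0.0.0:8935 \
  -cliAddr 127.0.0.1:7935

# Gateway: RTMP ingest on 1935, local CLI/metrics on 7936
# (older releases use -broadcaster instead of -gateway)
livepeer -gateway \
  -rtmpAddr 0.0.0.0:1935 \
  -cliAddr 127.0.0.1:7936
```

Any additional HTTP listeners each role opens must be separated the same way.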

Self-routing

Self-routing is when a gateway you control routes jobs to an orchestrator you also control.

Off-chain gateway to own orchestrator: Configure the gateway with -orchAddr <your-orchestrator-ip>:8935. The gateway sends all jobs to the specified address; this is explicit self-routing.

On-chain gateway discovering own orchestrator: If both your gateway and orchestrator are on-chain, the gateway discovers your orchestrator through the normal protocol selection process, alongside all other active orchestrators. Your orchestrator competes on price and stake like any other.

Self-routing via explicit -orchAddr is appropriate when:
  • Testing AI inference quality before serving jobs to clients
  • Running a dedicated internal service (e.g. transcoding your own content)
  • You want guaranteed routing to your own infrastructure without depending on protocol selection
Self-routing through on-chain discovery remains competitive: your orchestrator still has to win on price, stake, and performance to receive the job. Use an off-chain gateway with direct -orchAddr when routing must stay dedicated.

Pricing alignment: The gateway’s -maxPricePerUnit (or -maxPricePerCapability for AI) must be at or above the orchestrator’s -pricePerUnit (or price_per_unit in aiModels.json). A gateway with a cap below the orchestrator’s advertised price will fail to route any jobs to it.
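For the explicit case, the gateway start command might look like the following sketch. The -orchAddr placeholder and the price flags come from this page; the -gateway role flag is an assumption (older releases use -broadcaster):

```shell
# Off-chain gateway pinned to your own orchestrator: explicit self-routing.
# No protocol discovery runs; every job goes to this one address.
livepeer -gateway \
  -orchAddr <your-orchestrator-ip>:8935 \
  -maxPricePerUnit 1300    # must be at or above the orchestrator's -pricePerUnit
```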

Pricing alignment

When you control both gateway and orchestrator, configure the gateway’s caps relative to the orchestrator’s advertised prices:
Price cap alignment rule
gateway -maxPricePerUnit  ≥  orchestrator -pricePerUnit
gateway -maxPricePerCapability  ≥  orchestrator aiModels.json price_per_unit (per pipeline)
A gateway cap exactly equal to the orchestrator price is sufficient for self-routing, but it leaves no margin for autoAdjustPrice adjustments (which increase the advertised price during gas spikes). Set the gateway cap 20 to 30% above the orchestrator base price to prevent job failures during gas price increases.

Example video pricing alignment:
Example price alignment
# Orchestrator startup
-pricePerUnit 1000
-autoAdjustPrice=true    # may raise advertised price during gas spikes

# Gateway startup (same operator)
-maxPricePerUnit 1300    # 30% above base to absorb autoAdjustPrice headroom
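The headroom rule is simple arithmetic, and a quick shell check makes the relationship explicit. The 30% factor mirrors the example above; it is a rule of thumb, not a protocol requirement:

```shell
# Compute a gateway cap with 30% headroom over the orchestrator base price.
base=1000                      # orchestrator -pricePerUnit
cap=$(( base * 130 / 100 ))    # integer math: base plus 30% headroom
echo "$cap"                    # prints 1300, the -maxPricePerUnit in the example
```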

Monitoring both roles

Each role produces its own Prometheus metrics. On a single machine, ensure the two processes export metrics on different ports to avoid collisions.

Orchestrator metrics to watch:
  • livepeer_transcode_duration_seconds — transcoding latency
  • livepeer_winning_ticket_count — PM ticket win frequency
  • livepeer_reward_call_success — reward call outcome per round
Gateway metrics to watch:
  • livepeer_broadcaster_sessions_total — active inbound sessions
  • livepeer_broadcaster_upload_errors — upload failures (may indicate orchestrator issues)
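A quick way to confirm the two exporters are actually separated is to scrape each one directly. This sketch assumes metrics are served at a /metrics path on each process's local CLI port; both the path and the ports are assumptions — substitute whatever your -cliAddr/-monitor configuration exposes:

```shell
# Spot-check that each role exports its own metrics on its own port.
# Path and ports are assumptions; match your actual monitoring setup.
curl -s http://127.0.0.1:7935/metrics | grep -c '^livepeer_'   # orchestrator
curl -s http://127.0.0.1:7936/metrics | grep -c '^livepeer_'   # gateway
```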
For production deployments running both roles, maintain separate log streams per process to avoid interleaving:
Separate log streams per role
# Two separate systemd units or Docker containers
# Gateway logs
journalctl -u livepeer-gateway -f

# Orchestrator logs
journalctl -u livepeer-orchestrator -f
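With Docker instead of systemd, the same per-role separation might look like the sketch below. The livepeer/go-livepeer image name and the role flags are assumptions; host networking is used so the port layout on this page carries over unchanged:

```shell
# Two containers, one role each; logs stay separate per container.
docker run -d --name livepeer-orchestrator --network host \
  livepeer/go-livepeer:latest -orchestrator

docker run -d --name livepeer-gateway --network host \
  livepeer/go-livepeer:latest -gateway

# Follow one role's logs without interleaving
docker logs -f livepeer-gateway
```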
Last modified on March 16, 2026