By default, go-livepeer runs the Orchestrator and Transcoder as a single combined process on one machine. The split setup separates them: one machine handles protocol operations (on-chain interactions, job routing, reward calling) and one or more machines handle the GPU work. The two connect over the network using a shared secret. This is also the architectural foundation for pool operations: a pool extends the O-T split to accept connections from external workers. For a comparison of all alternate deployment options, see the deployment options overview.

Reasons to Split

Security isolation

The Ethereum keystore lives only on the Orchestrator machine. GPU worker machines have no wallet access. A compromised worker cannot drain funds or perform on-chain actions.

Independent scaling

Add or remove Transcoder machines without touching the Orchestrator. Scale GPU capacity by connecting more Transcoder nodes - each reports its own capacity to the Orchestrator.

Stable reward calling

The Orchestrator machine is usually a small stable VPS with no GPU. Reward calls come from this machine, independent of GPU machine availability.

Role-optimised hardware

Optimise the Orchestrator for fast CPU, reliable network, and stable uptime. Optimise Transcoder machines purely for GPU throughput.

Architecture

Data flow:
  1. A Gateway connects to the Orchestrator on port 8935 (the public service URI)
  2. The Orchestrator receives the job and dispatches it to an available connected Transcoder via gRPC
  3. The Transcoder processes the segment and returns results to the Orchestrator
  4. The Orchestrator returns results to the Gateway
The protocol sees only the Orchestrator. Transcoders stay behind that public endpoint.

Part 1 - Orchestrator Machine

The Orchestrator machine needs a publicly accessible IP or hostname, an Ethereum keystore, and outbound access to an Arbitrum RPC endpoint. GPU hardware stays on the Transcoder machines.
Orchestrator machine command
livepeer \
    -network arbitrum-one-mainnet \
    -ethUrl <ARBITRUM_RPC_URL> \
    -ethAcctAddr <YOUR_ETH_ADDRESS> \
    -orchestrator \
    -orchSecret <ORCH_SECRET> \
    -serviceAddr <YOUR_PUBLIC_HOST>:8935 \
    -pricePerUnit <PRICE_PER_UNIT>
Key flags for the Orchestrator-only process:
  • -orchestrator — run in standalone Orchestrator mode. With -orchestrator alone (no -transcoder), go-livepeer performs no local transcoding and routes jobs to connected Transcoders; job assignments start once at least one Transcoder connects.
  • -orchSecret — the shared secret that Transcoders must present to register.
  • -serviceAddr — the public host:port that Gateways and Transcoders connect to.
  • -pricePerUnit — the price advertised to Gateways.
Pass -orchSecret as a file path in production setups; a secret passed as plaintext on the command line is visible to any local user via ps aux.
Store orchSecret in a file
echo "my-secret-value" > /etc/livepeer/orchsecret.txt
chmod 600 /etc/livepeer/orchsecret.txt
# then: -orchSecret /etc/livepeer/orchsecret.txt
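As one way to generate the secret itself (a sketch assuming openssl is available; the file path is an example to match your -orchSecret flag), a random hex string avoids guessable values:

```shell
# Generate a 64-hex-character secret with restrictive permissions.
umask 077
openssl rand -hex 32 > orchsecret.txt
chmod 600 orchsecret.txt
# Sanity check: exactly 64 lowercase hex characters
grep -Eq '^[0-9a-f]{64}$' orchsecret.txt && echo "secret ok"
```

The same file then needs to reach each Transcoder machine over a secure channel, since both sides must present the identical value.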

Part 2 - Transcoder Machines

Each Transcoder machine needs an NVIDIA GPU with drivers installed and network connectivity to the Orchestrator on port 8935. Ethereum account management, LPT stake, and Arbitrum RPC stay on the Orchestrator machine.
Transcoder machine command
livepeer \
    -transcoder \
    -nvidia <GPU_IDs> \
    -orchSecret <ORCH_SECRET> \
    -orchAddr <ORCHESTRATOR_HOST>:8935 \
    -maxSessions <MAX_SESSIONS>
Key flags for the Transcoder-only process:
  • -transcoder — run in standalone Transcoder mode, with no on-chain responsibilities.
  • -nvidia — comma-separated NVIDIA GPU IDs to use for transcoding.
  • -orchSecret — must match the secret configured on the Orchestrator.
  • -orchAddr — the host:port of the Orchestrator to register with.
  • -maxSessions — the maximum concurrent sessions this machine reports as its capacity.
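Because Transcoder machines should reconnect unattended after reboots or crashes, running the process under a supervisor is common. A minimal sketch using systemd follows; every path, hostname, and flag value below is an example, not a prescribed layout:

```shell
# Write a systemd unit for the Transcoder into the current directory,
# then install it by hand. All values are placeholders to adapt.
UNIT=${UNIT:-./livepeer-transcoder.service}
cat > "$UNIT" <<'EOF'
[Unit]
Description=Livepeer Transcoder
After=network-online.target

[Service]
ExecStart=/usr/local/bin/livepeer \
  -transcoder \
  -nvidia 0 \
  -orchSecret /etc/livepeer/orchsecret.txt \
  -orchAddr orchestrator.example.com:8935 \
  -maxSessions 10
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
# Install and start (requires root):
#   sudo install -m 644 livepeer-transcoder.service /etc/systemd/system/
#   sudo systemctl daemon-reload && sudo systemctl enable --now livepeer-transcoder
```

Restart=always gives the reconnect-on-failure behavior; the Orchestrator side needs no changes when a Transcoder comes back.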

Verifying the connection

When the Transcoder connects successfully, the Orchestrator logs show:
Successful transcoder registration
Got a RegisterTranscoder request from transcoder=10.3.27.1 capacity=10
The capacity field reflects the Transcoder’s -maxSessions value. Once this line appears, the Orchestrator begins routing jobs to the connected Transcoder.
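This check can be scripted against the Orchestrator's logs. A sketch, assuming stdout is captured to a file named orchestrator.log (adjust the path to your setup):

```shell
# Print the most recent Transcoder registration, if any.
LOG=${LOG:-orchestrator.log}
if grep -q 'Got a RegisterTranscoder request' "$LOG" 2>/dev/null; then
    grep 'Got a RegisterTranscoder request' "$LOG" | tail -n 1
else
    echo "no transcoder registered yet"
fi
```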

Connecting Multiple Transcoders

Any number of Transcoders can connect to a single Orchestrator using the same -orchSecret. Each connection appears in the Orchestrator logs:
Multiple transcoder registrations
Got a RegisterTranscoder request from transcoder=10.3.27.1 capacity=10
Got a RegisterTranscoder request from transcoder=10.3.27.2 capacity=8
Got a RegisterTranscoder request from transcoder=10.3.27.3 capacity=12
The Orchestrator automatically distributes incoming job segments across all connected Transcoders. The effective session capacity is the sum of all connected Transcoder capacities - in the example above, 10 + 8 + 12 = 30 concurrent sessions. New Transcoders can be added at any time, and the Orchestrator routes to them immediately.
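The total can be read straight from the registration lines. A sketch, again assuming the log path (note that it counts every registration line, so a Transcoder that reconnected appears more than once - treat it as a rough check, not an authoritative counter):

```shell
# Sum the capacity= values from all RegisterTranscoder lines.
LOG=${LOG:-orchestrator.log}
grep 'Got a RegisterTranscoder request' "$LOG" 2>/dev/null \
  | grep -o 'capacity=[0-9]*' \
  | cut -d= -f2 \
  | awk '{ total += $1 } END { print "total capacity:", total + 0 }'
```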

Relationship to Pool Operations

The O-T split and a worker pool are the same architecture; the difference is operational scope - a pool additionally accepts connections from external workers it does not operate itself. For pool operations, including external worker connections and off-chain fee distribution, see the pool operations documentation.

Security Considerations

The orchSecret is the only authentication between Orchestrator and Transcoder. Any node with this secret is able to connect as a Transcoder and receive job assignments. Keep it private: leave it out of public Docker images, public configuration files, and version control. Use file-based secrets with restricted permissions.
In a correctly configured split setup, Transcoder machines stay keystore-free and run with workload-only flags. Only the Orchestrator submits on-chain transactions. Keep GPU worker machines dedicated to workload processing and leave keystores on the Orchestrator side.
Port 8935 must be publicly accessible for both Gateway and Transcoder connections. Gateways connect inbound to route jobs; Transcoders connect inbound to register and receive work. Firewall rules therefore need to allow inbound TCP on port 8935.
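As one example of the corresponding firewall configuration (ufw is shown as one option among many; adapt the rule to whatever firewall your distribution uses):

```shell
# Allow inbound TCP on the public service port, then confirm the rule.
sudo ufw allow 8935/tcp
sudo ufw status | grep 8935
```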
A compromised -orchSecret requires immediate rotation: generate a new secret, update the Orchestrator launch command, communicate the new secret to all Transcoder operators, then restart the Orchestrator. Existing Transcoder connections drop during the restart and reconnect automatically once updated with the new secret, so plan for a short reconnection window.

Troubleshooting

Primary endpoints used in this setup:
  • Status endpoint: https://<orchestrator-host>:8935/status
  • Metrics endpoint: http://localhost:7935/metrics
Use the status endpoint to confirm the public Orchestrator is responding and the metrics endpoint to inspect local workload and capacity counters before working through the checks below.
Check in order:
  1. Verify port 8935 is reachable from the Transcoder: curl -v https://<orchestrator-host>:8935/status
  2. Confirm -orchSecret matches exactly on both sides (case-sensitive)
  3. For HTTPS deployments, confirm the TLS certificate chain is trusted by the Transcoder host
  4. Check Transcoder startup logs for the GPU test result. A GPU test failure exits the process before it connects
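The first check can be scripted so it is easy to repeat from each Transcoder host. A sketch - the hostname is a placeholder, and -k is shown because the service port typically presents a self-signed certificate:

```shell
# Report whether the Orchestrator's status endpoint answers at all.
ORCH=${ORCH:-https://orchestrator.example.com:8935}
if curl -sk --max-time 5 "$ORCH/status" > /dev/null; then
    echo "status endpoint reachable"
else
    echo "status endpoint unreachable"
fi
```

"Unreachable" here points at DNS, firewall, or routing problems rather than go-livepeer configuration.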
Once "Got a RegisterTranscoder request" appears in the Orchestrator logs, the Transcoder is connected and eligible for work. If jobs are arriving at the Orchestrator but the Transcoder stays idle:
  • Check whether the Transcoder’s -maxSessions capacity is already reported as fully used
  • Check the Orchestrator metrics endpoint
  • An idle Orchestrator usually points to Gateway-side routing rather than the Orchestrator-Transcoder link
If the Transcoder's GPU startup test fails, it is typically because the NVENC session cap has been reached on that GPU; see the GPU and memory errors section of the troubleshooting guide.
Last modified on March 16, 2026