Reasons to Split
Security isolation
The Ethereum keystore lives only on the Orchestrator machine. GPU worker machines have no wallet access. A compromised worker cannot drain funds or perform on-chain actions.
Independent scaling
Add or remove Transcoder machines without touching the Orchestrator. Scale GPU capacity by connecting more Transcoder nodes - each reports its own capacity to the Orchestrator.
Stable reward calling
The Orchestrator machine is usually a small stable VPS with no GPU. Reward calls come from this machine, independent of GPU machine availability.
Role-optimised hardware
Optimise the Orchestrator for fast CPU, reliable network, and stable uptime. Optimise Transcoder machines purely for GPU throughput.
Architecture
Data flow:
- A Gateway connects to the Orchestrator on port 8935 (the public service URI)
- The Orchestrator receives the job and dispatches it to an available connected Transcoder via gRPC
- The Transcoder processes the segment and returns results to the Orchestrator
- The Orchestrator returns results to the Gateway
Part 1 - Orchestrator Machine
The Orchestrator machine needs a publicly accessible IP or hostname, an Ethereum keystore, and outbound access to an Arbitrum RPC endpoint. GPU hardware stays on the Transcoder machines.
Orchestrator machine command
With -orchestrator alone, go-livepeer runs in standalone Orchestrator mode. It routes jobs to connected Transcoders and performs no local transcoding. Job assignments start once at least one Transcoder connects.
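As an illustration, a standalone Orchestrator launch for this mode might look like the following; the hostname, RPC URL, secret path, and keystore path are placeholders, and your flag set may differ:

```shell
livepeer \
  -orchestrator \
  -serviceAddr orchestrator.example.com:8935 \
  -orchSecret /etc/livepeer/orch_secret.txt \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb1.arbitrum.io/rpc \
  -ethKeystorePath /home/livepeer/.lpData/keystore
```

Note there is no -nvidia or -transcoder flag here: this machine never touches a GPU.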
Pass -orchSecret as a file path for production setups: a plaintext secret passed on the command line remains visible to any local user in the process list via ps aux.
Store orchSecret in a file
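One way to create such a file, assuming openssl is available (the filename is illustrative):

```shell
# Create a secret file readable only by its owner
umask 077
openssl rand -hex 32 > orch_secret.txt
chmod 600 orch_secret.txt
# Then pass the *path*, not the value:
#   livepeer -orchestrator -orchSecret orch_secret.txt ...
```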
Part 2 - Transcoder Machines
Each Transcoder machine needs an NVIDIA GPU with drivers installed and network connectivity to the Orchestrator on port 8935. Ethereum account management, LPT stake, and Arbitrum RPC stay on the Orchestrator machine.
Transcoder machine command
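An illustrative worker launch under these assumptions (the Orchestrator hostname, secret path, GPU index, and session count are placeholders to adapt):

```shell
livepeer \
  -transcoder \
  -orchAddr orchestrator.example.com:8935 \
  -orchSecret /etc/livepeer/orch_secret.txt \
  -nvidia 0 \
  -maxSessions 20
```

No keystore, -network, or -ethUrl flags appear here: the worker stays entirely off-chain.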
Verifying the connection
When the Transcoder connects successfully, the Orchestrator logs a Got a RegisterTranscoder request line whose capacity field reflects the Transcoder's -maxSessions value. Once this line appears, the Orchestrator begins routing jobs to the connected Transcoder.
Connecting Multiple Transcoders
Any number of Transcoders can connect to a single Orchestrator using the same -orchSecret. Each connection appears in the Orchestrator logs as a separate registration.
Relationship to Pool Operations
The O-T split and a worker pool are the same architecture; the difference is operational scope. For pool operations, including external worker connections and off-chain fee distribution, see Run a Pool.
Security Considerations
Protect the orchSecret
The orchSecret is the only authentication between Orchestrator and Transcoder. Any node with this secret can connect as a Transcoder and receive job assignments. Keep it private: leave it out of public Docker images, public configuration files, and version control, and use file-based secrets with restricted permissions.
Transcoders stay off-chain
In a correctly configured split setup, Transcoder machines stay keystore-free and run with
workload-only flags. Only the Orchestrator submits on-chain transactions. Keep GPU worker
machines dedicated to workload processing and leave keystores on the Orchestrator side.
Port 8935 on the Orchestrator
Port 8935 must be publicly accessible for both Gateway and Transcoder connections. Gateways
connect inbound to route jobs; Transcoders connect inbound to register and receive work. Firewall
rules therefore need to allow inbound TCP on port 8935.
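With ufw as the host firewall, for example (assuming ufw is in use; adapt for iptables or a cloud security group):

```shell
# Allow inbound TCP on the Orchestrator's public service port
sudo ufw allow 8935/tcp
# Confirm the rule is active
sudo ufw status
```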
Rotating the orchSecret
A compromised -orchSecret requires immediate rotation: generate a new secret, update the Orchestrator launch command, communicate the new secret to all Transcoder operators, then restart the Orchestrator. Existing Transcoder connections drop during the restart and reconnect automatically with the new secret, so plan for a short reconnection window.
Troubleshooting
Primary endpoints used in this setup:
- Status endpoint: https://<orchestrator-host>:8935/status
- Metrics endpoint: http://localhost:7935/metrics
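A quick manual check of both endpoints (the hostname is a placeholder, and the metrics endpoint assumes the node runs with -monitor):

```shell
# Public status endpoint on the service port
curl -sk https://orchestrator.example.com:8935/status
# Local metrics on the CLI port (only populated when -monitor is enabled)
curl -s http://localhost:7935/metrics | head
```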
Transcoder connection never reaches Orchestrator logs
Check in order:
- Verify port 8935 is reachable from the Transcoder: curl -v https://<orchestrator-host>:8935/status
- Confirm -orchSecret matches exactly on both sides (case-sensitive)
- For HTTPS deployments, confirm the TLS certificate chain is trusted by the Transcoder host
- Check Transcoder startup logs for the GPU test result; a GPU test failure exits the process before it connects
Transcoder connects but stays idle
Once Got a RegisterTranscoder request appears in Orchestrator logs, the Transcoder is connected and eligible for work. If jobs are arriving at the Orchestrator but the Transcoder still stays idle:
- Check whether the Transcoder's -maxSessions capacity is already reported as fully used
- Check the Orchestrator metrics endpoint
- An idle Orchestrator usually points to Gateway routing rather than this setup
Cannot allocate memory at Transcoder startup
The Transcoder's GPU startup test failed, typically because the NVENC session cap has been reached on that GPU. See the GPU and memory errors section of the .
Related Pages
Alternate Deployments
Overview of all three alternate deployment options and how to choose between them.
Siphon Setup
Combine the split architecture with OrchestratorSiphon for keystore isolation and reward safety.
Run a Pool
Extend this architecture to accept external worker connections.
Large-Scale Operations
Fleet architecture and multi-Orchestrator operations.