How a pool works
The Livepeer protocol sees only one entity: your orchestrator. It has a single on-chain address, a single stake, and a single service URI. Everything behind that address is your architecture to design. In a pool, the orchestrator node accepts connections from remote transcoders (workers). When a gateway routes a job to your orchestrator, go-livepeer dispatches it to an available worker over a gRPC streaming RPC. Workers process the segment and return results — the orchestrator handles all protocol-level interaction, the workers handle the compute. Workers have no on-chain presence. Delegators and Explorer see only your orchestrator identity, and all stake, protocol reputation, and on-chain fees flow through that address.

Worker connection models
Accepting workers
To configure your orchestrator to accept remote worker connections, run without `-transcoder` but with `-orchSecret`:
Accept remote workers on the orchestrator
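A minimal sketch of the standalone-orchestrator invocation. The flags (`-orchestrator`, `-orchSecret`, `-serviceAddr`, `-network`, `-ethUrl`) are real go-livepeer flags, but the network, RPC URL, address, and secret shown here are placeholders — substitute your own values:

```shell
# Standalone orchestrator: -transcoder is omitted, so all jobs are
# dispatched to connected remote workers. Placeholder values throughout.
livepeer \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb1.arbitrum.io/rpc \
  -orchestrator \
  -orchSecret "change-me" \
  -serviceAddr 203.0.113.10:8935
```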
- `-orchSecret` is a shared secret that authenticates worker connections. Any node that knows this secret can connect as a worker. Treat it like a password.
- `-transcoder` is omitted. This puts the orchestrator in standalone mode: it handles gateway connections and routing, but does no local transcoding. All jobs go to connected workers.
- Port 8935 must be open for both inbound gateway connections and inbound worker connections.
Load the shared secret from a file
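One portable way to keep the secret out of your shell history is to read it from a root-only file at launch via command substitution (the file path here is an assumption; note the secret still appears in the process argument list, and some go-livepeer releases may also accept a file path for `-orchSecret` directly — check `livepeer -help` for your version):

```shell
# Restrict the secret file, then expand it at launch time.
# /etc/livepeer/orch_secret is a placeholder path.
chmod 600 /etc/livepeer/orch_secret
livepeer \
  -orchestrator \
  -orchSecret "$(cat /etc/livepeer/orch_secret)" \
  -serviceAddr 203.0.113.10:8935
```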
In the `Got a RegisterTranscoder request` log line, the `capacity` field is the worker's `-maxSessions` value — how many concurrent jobs it can handle.
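For reference, the worker side of that registration uses `-transcoder`, `-orchAddr`, and the shared secret. A sketch with placeholder address and secret — `-nvidia all` and `-maxSessions` are real flags, but the values are illustrative:

```shell
# Worker side: connect a GPU machine to the orchestrator above.
# -maxSessions sets the capacity advertised at registration.
livepeer \
  -transcoder \
  -orchAddr 203.0.113.10:8935 \
  -orchSecret "change-me" \
  -nvidia all \
  -maxSessions 10
```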
Fee distribution
Fee distribution in a Livepeer pool is entirely off-chain. The protocol pays all fees and rewards to your orchestrator’s Ethereum address. There is no protocol mechanism to split payments to workers automatically. Pool operators implement their own payout systems.
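Since the protocol gives you no split mechanism, the core of any payout system is a pro-rata calculation over off-chain work records. A minimal sketch, assuming you already track pixels (or segments) transcoded per worker and take a flat pool cut — the function name, cut, and worker names are all illustrative, not part of go-livepeer:

```python
def split_fees(total_fees_wei: int,
               pixels_by_worker: dict[str, int],
               pool_cut: float = 0.10) -> dict[str, int]:
    """Return each worker's payout in wei, pro-rata by pixels transcoded.

    Integer division means dust (a few wei) stays with the pool operator.
    """
    distributable = int(total_fees_wei * (1 - pool_cut))
    total_pixels = sum(pixels_by_worker.values())
    if total_pixels == 0:
        return {worker: 0 for worker in pixels_by_worker}
    return {
        worker: distributable * pixels // total_pixels
        for worker, pixels in pixels_by_worker.items()
    }

# Example: 1,000,000 wei of fees, 10% pool cut, three workers.
payouts = split_fees(
    total_fees_wei=1_000_000,
    pixels_by_worker={"worker-a": 600, "worker-b": 300, "worker-c": 100},
)
```

Running the example splits the 900,000 wei distributable pot 6:3:1 across the three workers. Real systems add persistence, an audit trail workers can verify, and batched on-chain transfers to amortize gas.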
On-chain identity and transparency
Your pool is represented entirely by your orchestrator's Ethereum address. On Livepeer Explorer:

- Delegators see your total stake, reward cut, fee cut, and historical performance
- Explorer shows only the orchestrator — worker-level data stays in your off-chain systems.
- Pool performance (sessions, fees) reflects aggregate work done by all connected workers combined
Ongoing operational responsibilities
Worker connection management
Monitor connected workers. If a worker disconnects, your orchestrator continues accepting jobs and assigns them to the remaining workers. A worker that was mid-session when it disconnected will cause that session to fail and the segment to be retried by the gateway.

Workers reconnect automatically on restart. You will see a new `Got a RegisterTranscoder request` log line each time. There is no manual reconnection step required on your side.
NVENC session caps on consumer GPUs
Consumer NVIDIA GPUs (GTX/RTX series) have a hardware-enforced limit on concurrent NVENC encoding sessions — typically 3–8 per card depending on the model. Workers hitting this limit will reject new segments.

Titan Node patches the NVIDIA driver on worker machines to remove this cap. Operators who want higher worker concurrency should either communicate this limitation early or provide driver-patching instructions.
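To see how close a worker is to its cap, NVENC session usage can be queried per GPU with `nvidia-smi` (the `encoder.stats.sessionCount` field is listed under `nvidia-smi --help-query-gpu`; availability can vary by driver version):

```shell
# Current NVENC session count per GPU on this worker machine.
nvidia-smi --query-gpu=index,name,encoder.stats.sessionCount --format=csv
```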
Session routing and load balancing
go-livepeer distributes sessions across connected workers internally. No manual load balancing is required for basic deployments. For large pools, a load balancer in front of multiple orchestrator instances is possible — see `doc/multi-o.md` in the go-livepeer repository for the multi-orchestrator architecture.
Node updates and downtime
Updating go-livepeer on the orchestrator drops all connected workers and in-flight sessions. Gateways will observe the interruption. Workers reconnect automatically after the orchestrator restarts.

For pools with SLA commitments, coordinate updates during low-traffic periods and communicate planned downtime to workers in advance.
Payout and worker communication
Establish a clear communication channel with your workers — a Discord server, Telegram group, or mailing list. Workers need timely notice of planned downtime, the payout schedule, fee changes, and how to report connection issues.

Poor communication is the most common cause of worker churn in community pools.
orchSecret rotation
If you need to rotate your `-orchSecret` (for example, because you believe it has been compromised), all existing worker connections will drop immediately when the orchestrator restarts with the new secret. There is no zero-downtime rotation mechanism. Communicate the new secret to all workers before restarting the orchestrator. Workers reconnect automatically once they are updated with the new secret.

Key facts to remember
One entity on-chain
Your pool is one orchestrator address. Workers are invisible to the protocol. All reputation, stake, and on-chain fees flow to you.
Fee distribution is your problem
The protocol does not split fees to workers. You track contributions and pay from your wallet. Build or adopt tooling before onboarding workers.
-orchSecret is the gate
Anyone with your `-orchSecret` can connect as a worker and receive jobs. Keep it private. Rotate it if compromised.

Workers need nothing on-chain
Workers do not need LPT, an Ethereum account for protocol purposes, or an RPC endpoint. They contribute compute only.
Join a Pool
The worker perspective — connecting your GPU to an existing pool.
Split O-T Setup
The orchestrator-transcoder split that underpins pool architecture.
Fleet Operations
Running multiple orchestrators at data-centre scale.
Earnings and Economics
Pool economics, fee strategy, and what to expect from transcoding revenue.