Troubleshooting — Installation and GPU Detection
GPU not detected — the -nvidia flag returns no devices
Running livepeer -nvidia all produces no output or logs an error indicating no NVIDIA devices were found.

Cause
The NVIDIA driver version installed on the host is below the minimum required by the current go-livepeer release, or the NVIDIA Container Toolkit is not installed when running in Docker.

Fix
- Confirm your driver version. The output shows the Driver Version.
- If the driver is below the minimum, update it.
- If running via Docker, verify the NVIDIA Container Toolkit is installed. If the verification fails, install the toolkit.
- Re-run go-livepeer with the -nvidia flag, passing specific GPU IDs. Use -nvidia all to target all available GPUs.
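The verification commands did not survive extraction in this copy. A typical driver check on Linux looks like the sketch below; the minimum version 520.00 is a placeholder, so take the real minimum from the go-livepeer release notes.

```shell
# Query the installed driver version (requires an NVIDIA driver):
#   nvidia-smi --query-gpu=driver_version --format=csv,noheader
# Compare it against a minimum with sort -V. The minimum below is a
# placeholder; take the real value from the go-livepeer release notes.
version_ge() {  # true if $1 >= $2 (dot-separated version compare)
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
driver="535.104.05"   # sample value; on a live host, read it from nvidia-smi
if version_ge "$driver" "520.00"; then
  echo "driver OK"
else
  echo "driver too old, update it"
fi
```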
"OrchestratorCapped" error
Your orchestrator logs OrchestratorCapped and stops accepting new jobs from gateways.

Cause
Your orchestrator has reached its session limit. This can be caused by the -maxSessions flag being set too low, or by hitting the hardware NVENC/NVDEC session limit on your GPU.

Fix
- If the error appears during normal operation (not at startup), increase the session limit via livepeer_cli: select the option to update the maximum number of sessions and set a higher value. Or set it at startup with the -maxSessions flag.
- If the error appears at startup when using the -nvidia flag, the GPU itself has reached its hardware encoding/decoding session limit. Different NVIDIA GPU models have different limits; consumer cards (GTX/RTX series) typically cap at 3–5 concurrent NVENC sessions. Search "nvenc nvdec session limit <your GPU model>" to find the limit for your card.
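As a sketch of how the per-model limit matters, the lookup below uses purely illustrative numbers; the real figures depend on both the GPU model and the driver version (NVIDIA has raised the consumer cap over time), so verify for your own card.

```shell
# Illustrative NVENC concurrent-session limits; NOT authoritative values.
# Verify the real limit for your card and driver version.
nvenc_limit() {
  case "$1" in
    "GTX 1050"|"GTX 1060") echo 3 ;;
    "RTX 3060"|"RTX 4070") echo 5 ;;
    *) echo unknown ;;
  esac
}
nvenc_limit "RTX 3060"   # prints 5 with these illustrative values
```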
Binary not found after download
After downloading the go-livepeer binary, running livepeer in the terminal returns command not found.

Cause
The binary is not in a directory that is on your system PATH, or the file permissions do not allow execution.

Fix
- Make the binary executable.
- Move the binaries to a directory on your PATH.
- Verify that the livepeer command now resolves.
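The commands for these steps were stripped from this copy. The sequence below demonstrates them with a stand-in script; on a real host, substitute the downloaded livepeer binaries and a PATH directory such as /usr/local/bin.

```shell
# Stand-in demonstration of: chmod +x, move onto PATH, verify.
workdir=$(mktemp -d) && cd "$workdir"
printf '#!/bin/sh\necho livepeer-ok\n' > livepeer   # stand-in for the real binary
chmod +x livepeer                                   # 1. make it executable
mkdir -p bin && mv livepeer bin/                    # 2. move to a PATH directory
export PATH="$workdir/bin:$PATH"                    #    (e.g. /usr/local/bin)
livepeer                                            # 3. verify it resolves
```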
CUDA mismatch or CUDA library not found
go-livepeer starts but logs a CUDA error, or GPU transcoding fails immediately with a CUDA library error.

Cause
The CUDA version installed on your host does not match the version that go-livepeer was compiled against for the current release.

Fix
- Check your CUDA version.
- Check the go-livepeer release notes for the current release to identify the required CUDA version.
- If using Docker, pull the official go-livepeer image which bundles the correct CUDA version instead of relying on a host CUDA installation.
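On a host install, the toolkit version can be read from nvcc. The sed sketch below extracts the release number from a sample of that output; the sample line and version numbers are illustrative.

```shell
# Live check: nvcc --version
# (nvidia-smi also reports the maximum CUDA version the driver supports.)
# Extract the release number from the output; sample line for illustration:
sample="Cuda compilation tools, release 12.2, V12.2.140"
echo "$sample" | sed -n 's/.*release \([0-9]*\.[0-9]*\).*/\1/p'
```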
Troubleshooting — Networking and Connectivity
"Service address did not match discovered address"
On startup, your orchestrator logs a "Service address did not match discovered address" warning.

Cause
When starting up, go-livepeer checks whether the current public IP matches the address stored on the Livepeer blockchain from your registration. If your server IP has changed, or if you registered with the wrong address, the check fails. Your node may still start, but gateways cannot route jobs to you if the on-chain address is unreachable.

Fix
- Identify the address currently stored on-chain. You can find this on the Livepeer Explorer — search for your orchestrator’s ETH address.
- If the stored address is wrong, update it via livepeer_cli: select the option to update your service URI and enter the correct https://YOUR_PUBLIC_IP:8935 (or your domain name if you registered with one). Note: updating a service URI requires a blockchain transaction and costs ETH for gas.
- If the address is correct but your IP has changed since registration, either update the on-chain registration or override the local check at startup.
- Consider using a domain name (e.g. orch.yourdomain.com:8935) instead of a bare IP address when registering your service URI. With a domain name, an IP change only requires a DNS update, not a new on-chain registration.
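As a quick sanity check before paying gas, compare the on-chain host with your current public IP. The values below are examples from the documentation IP range; a live host would fetch its real public IP as shown in the comment.

```shell
# Sketch: compare the registered host with the current public IP.
# On a live host you might fetch the current IP with a service such as:
#   current_ip=$(curl -s https://ifconfig.me)
onchain_host="203.0.113.10"   # example values (TEST-NET documentation range)
current_ip="203.0.113.25"
if [ "$onchain_host" = "$current_ip" ]; then
  echo "service address matches"
else
  echo "mismatch: update the on-chain URI or start with -serviceAddr"
fi
```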
Port 8935 not reachable from outside your network
Your orchestrator starts without errors, but no gateways connect, and the Service address did not match error does not appear. Running an external port check against your IP:8935 shows the port as closed.

Cause
Port 8935 (the default orchestrator port) is blocked by a firewall, a router NAT, or a cloud provider security group. Gateways discover your orchestrator address from the blockchain but cannot reach you.

Fix
- Ensure port 8935 is open in your server's firewall.
- If your server is behind a home router or NAT, configure port forwarding on your router to forward external port 8935 to your server’s local IP on port 8935.
- If running in a cloud provider (AWS, GCP, Hetzner, etc.), check and update your security group or firewall rules to allow inbound TCP on port 8935.
- Verify the port is externally reachable. A response (even an error response) confirms the port is reachable.
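The reachability command was not preserved here. A minimal TCP probe, assuming bash and coreutils timeout are available, is sketched below; run it from outside your network against your public IP (probing 127.0.0.1 only demonstrates the output shape).

```shell
# TCP reachability probe. Run from OUTSIDE your network against your
# public IP; 127.0.0.1 is used here only to demonstrate the output.
probe() {
  if timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "port $2 on $1 is open"
  else
    echo "port $2 on $1 is closed or filtered"
  fi
}
probe 127.0.0.1 8935
```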
Arbitrum RPC connection failing — node will not start
go-livepeer fails to start or logs repeated connection errors to the Arbitrum RPC endpoint. Example log patterns: dial tcp: connection refused, context deadline exceeded, or could not retrieve chain ID.

Cause
The -ethUrl value is incorrect, the RPC endpoint is rate-limited, or the provider (Alchemy, Infura, etc.) is down or requires API key renewal.

Fix
- Verify your -ethUrl value is an Arbitrum One RPC endpoint, not an Ethereum mainnet endpoint:
  - Arbitrum One endpoint format: https://arb-mainnet.g.alchemy.com/v2/YOUR_API_KEY
  - Do not use an Ethereum L1 endpoint; go-livepeer operates on Arbitrum.
- Test the endpoint independently. Arbitrum One should return chain ID 0xa4b1 (42161 in decimal).
- If rate-limited, switch to a different RPC provider or upgrade your plan. Free-tier RPC endpoints from Alchemy and Infura have request limits that can be hit by active orchestrators.
- Check your API key has not expired or been rotated.
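The independent test can be done with a standard JSON-RPC call, shown as a comment below since it needs your endpoint URL; the hex-to-decimal conversion confirms the expected Arbitrum One chain ID.

```shell
# A live check posts a JSON-RPC request to your endpoint:
#   curl -s -X POST -H 'Content-Type: application/json' \
#     -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
#     "$YOUR_ETH_URL"
# Arbitrum One answers with "result":"0xa4b1". Converting the hex:
printf '%d\n' 0xa4b1   # 42161
```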
Troubleshooting — Not Receiving Jobs
Not in the active orchestrator set (top 100 by stake)
Your orchestrator is running, the port is reachable, but you receive no transcoding or AI jobs. Your address does not appear on the Livepeer Explorer active orchestrators list.

Cause
The active orchestrator set is limited to the top 100 orchestrators by total LPT stake (self-delegated plus delegated). If your stake is below the 100th orchestrator's stake, you are not in the active set and gateways will not route jobs to you.

Fix
- Go to explorer.livepeer.org and find the 100th orchestrator in the list. Note their total stake — that is the current minimum.
- To enter the active set, your total stake must exceed that figure. You can:
  - Self-delegate additional LPT
  - Attract more delegators by adjusting your reward cut to be competitive
- Once your stake places you in the top 100, your orchestrator must be activated or re-activated via livepeer_cli: select the multi-step "become an orchestrator" option.
pricePerUnit set too high — gateways are not selecting your node
Your orchestrator is in the active set, reachable, and activated, but you receive very few or no jobs. No error messages appear in your logs.

Cause
-pricePerUnit is set too high relative to competing orchestrators. Gateways select orchestrators based on a combination of stake weight and price. If your price is significantly above the market rate, gateways will route jobs to lower-priced orchestrators first.

Fix
- Check the current market rate at livepeer.tools, which shows per-pixel pricing for active orchestrators.
- Note the unit: -pricePerUnit is set in wei per pixel, not ETH. A common misconfiguration is setting the value in ETH (e.g. 0.0001) instead of the equivalent wei value, which produces a price orders of magnitude too high. A typical starting value for transcoding is in the range of a few hundred to a few thousand wei per pixel. Check the Explorer for comparable orchestrators' pricing.
- Update your price via livepeer_cli: select the option to update your price per unit and enter the new value in wei. Or restart your node with the updated -pricePerUnit flag.
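To make the unit pitfall concrete: 1 ETH is 10^18 wei, so a value intended as "0.0001 ETH" but entered after converting to wei is astronomically higher than a sensible per-pixel price of a few hundred to a few thousand wei.

```shell
# The wei-vs-ETH pitfall in numbers: 1 ETH = 10^18 wei.
wei_per_eth=1000000000000000000
echo $(( wei_per_eth / 10000 ))   # 0.0001 ETH expressed in wei: 10^14
```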
Node not activated via livepeer_cli
Your orchestrator is running, the port is reachable, and your stake is sufficient, but you still receive no jobs. Your address does not appear in the active orchestrators list on the Explorer.

Cause
go-livepeer running in orchestrator mode does not automatically register or activate the node on the Livepeer protocol. Activation requires a one-time on-chain transaction via livepeer_cli.

Fix
- With your orchestrator running, open a second terminal and run livepeer_cli.
- Select the option to invoke the multi-step “become an orchestrator” flow.
- You will be prompted to set:
  - Reward cut (percentage of LPT inflation you keep; the remainder goes to delegators)
  - Fee cut (percentage of ETH fees you keep)
  - Price per unit (in wei per pixel)
  - Service address (your public IP:port)
  - Amount of LPT to stake (in LPTU; 1 LPT = 1,000,000,000,000,000,000 LPTU)
- Each step submits an on-chain transaction. You need ETH on Arbitrum to pay gas.
- After completion, verify your orchestrator appears on explorer.livepeer.org.
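The LPT-to-LPTU conversion trips up many first-time operators. A worked example for the staking prompt:

```shell
# 1 LPT = 10^18 LPTU, so staking 5 LPT means entering 5 * 10^18.
lptu_per_lpt=1000000000000000000
stake_lpt=5                           # example: staking 5 LPT
echo $(( stake_lpt * lptu_per_lpt ))  # value to enter in livepeer_cli
```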
serviceAddr not externally reachable after activation
Your orchestrator is activated and appears on the Explorer, but still receives no jobs. A port check against your registered IP:port shows the port as closed or unreachable.

Cause
Your service address is registered on-chain, but the actual server at that address is not reachable: either a firewall rule was added after registration, the server moved to a different IP, or a network change broke external access.

Fix
See the Port 8935 not reachable entry above for step-by-step networking checks. If your IP address has changed, update your on-chain service URI via livepeer_cli. This costs ETH for gas.

Troubleshooting — AI Pipeline Errors
AI Runner container not starting
Your AI Runner Docker container exits immediately or fails to start. Docker logs show errors such as CUDA error, OOM, device not found, or a port binding failure.

Cause
Common causes: the NVIDIA Container Toolkit is not installed or configured; the GPU has insufficient VRAM for the loaded model; the container image tag does not match the go-livepeer version; a port conflict on the host.

Fix
- Verify Docker can see your GPU. If this fails, install and configure the NVIDIA Container Toolkit.
- Check your container logs for the specific error.
- If the error is out-of-memory (OOM): the model you are loading requires more VRAM than available on the GPU. Either load a smaller model or use a higher-VRAM GPU.
- Ensure you are using the AI Runner image that corresponds to your go-livepeer release version. Mismatched versions can cause silent container failures.
- If the port is already in use, change the host port binding in your Docker run command, and update the corresponding url field in aiModels.json to match.
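For example, if the host binding moves from port 8000 to 8001, the url in aiModels.json must follow. A sed sketch on a sample line (the file path in the comment is hypothetical; use wherever your -aiModels flag points):

```shell
# In-place edit on a real host (path hypothetical):
#   sed -i 's|localhost:8000|localhost:8001|' /path/to/aiModels.json
# Demonstrated here on a sample line:
echo '"url": "http://localhost:8000"' | sed 's|localhost:8000|localhost:8001|'
```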
aiModels.json errors — AI jobs not being received
Your AI Runner container is running, but your orchestrator does not receive AI inference jobs. No AI-related errors appear in your orchestrator logs.

Cause
The most common cause is an incorrect or missing aiModels.json configuration. If go-livepeer cannot load a valid pipeline configuration, it will not advertise AI capabilities to gateways and will not receive AI jobs.

Fix
- Verify your aiModels.json is correctly formatted. The minimum required fields for each entry are listed in the field reference below:
  - pipeline: the inference task type (e.g. text-to-image, image-to-image, llm). Must match a supported pipeline name.
  - model_id: the Hugging Face model ID. Must be in the Livepeer-verified model list.
  - warm: loads the model on GPU in advance. If set to false, the model loads on first request (slower, but uses no VRAM until needed).
  - price_per_unit: price in wei per unit (the unit definition varies by pipeline; for image pipelines it is per pixel).
  - pixels_per_unit: typically 1.
- If using an external container (e.g. a custom AI Runner or Ollama), add the url field pointing to the running container.
- Validate that your AI Runner container is running and reachable at the url you specified.
- Restart go-livepeer after modifying aiModels.json; changes to this file are not hot-reloaded.
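The JSON example that originally accompanied these steps was lost in extraction. A minimal illustrative entry follows; the model ID and price value are examples only, so check the Livepeer AI documentation for the verified model list and current pricing.

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "SG161222/RealVisXL_V4.0_Lightning",
    "price_per_unit": 4768371,
    "pixels_per_unit": 1,
    "warm": true
  }
]
```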
Model fails to load — VRAM or memory errors
The AI Runner container starts, but attempting to serve an inference job produces an out-of-memory error or the job fails immediately.

Cause
The model you have configured requires more VRAM than your GPU has available, or other models are consuming VRAM and leaving insufficient headroom.

Fix
- Check GPU memory usage.
- If running multiple warm models simultaneously, the total VRAM requirement is additive. Consider setting some models to "warm": false to load them on demand instead of preloading.
- For the LLM pipeline specifically, the Cloud SPE Ollama runner supports GPUs with as little as 8 GB VRAM (using quantised model weights). Diffusion models (text-to-image, image-to-video) typically require 16 GB+ VRAM for full-precision models.
- Enable the DEEPCACHE or SFAST optimisation flags to reduce VRAM usage and improve throughput for diffusion models.
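The optimisation flags are set as environment variables on the runner container; the docker command in the comment is a sketch (image tag and flags as commonly shown for the AI Runner, so confirm against your release). The additive VRAM point can be budget-checked with a one-liner (sizes in GiB are illustrative):

```shell
# Sketch of enabling the flags on the runner container (tag illustrative):
#   docker run --gpus all -e SFAST=true -e DEEPCACHE=true ... livepeer/ai-runner:latest
# Warm models stack up in VRAM; quick additive budget check:
model_a=8; model_b=14; vram=24        # illustrative GiB figures
if [ $(( model_a + model_b )) -le "$vram" ]; then
  echo fits
else
  echo "over budget"
fi
```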
AI jobs routed to wrong pipeline or model
You are receiving AI jobs, but they fail or return errors related to pipeline mismatch. Or: you have configured multiple pipelines but only one ever receives jobs.

Cause
Gateways route jobs based on the pipeline and model ID advertised in your aiModels.json. If the requested model ID does not match an entry in your config, the job is rejected. If your model is not in the Livepeer-verified model registry, it will not be matched to gateway requests.

Fix
- Ensure the model_id values in your aiModels.json exactly match the Hugging Face model IDs in the Livepeer-verified model list.
- If you want to run a model not on the approved list, submit a feature request on the go-livepeer GitHub repository to have it verified and added.
- Verify your orchestrator is advertising all configured pipelines. Check the Explorer or query your orchestrator's status endpoint.
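The status query is sketched below; the live curl targets the node's CLI/HTTP port (7935 by default, changed with -httpAddr), and the response fragment used for the offline check is a simplified illustration, not the exact schema.

```shell
# Live query (default CLI/HTTP port 7935; -httpAddr changes it):
#   curl -s http://localhost:7935/status
# Checking a saved response for a configured pipeline; the JSON shape below
# is a simplified illustration:
response='{"Pipelines":["text-to-image","llm"]}'
case "$response" in
  *'text-to-image'*) echo "text-to-image advertised" ;;
  *) echo "text-to-image missing" ;;
esac
```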
Troubleshooting — Earnings and Payments
Earnings not appearing in the Explorer
Your orchestrator is running and processing jobs, but earnings are not updating on explorer.livepeer.org.

Cause
There are two types of earnings on Livepeer: transcoding fees (ETH, paid per job) and staking rewards (LPT, minted per round). Explorer indexing can have a delay of a few minutes to hours. Additionally, staking rewards require a reward call to be made each round.

Fix
- Allow up to 24 hours for Explorer indexing before raising an issue — especially for new orchestrators.
- For staking rewards specifically: these are only minted if a reward call is made during the current round. If no reward call is made, no LPT is minted for that round. Check your logs for reward call transactions.
- Verify you are checking the correct network. go-livepeer operates on Arbitrum One. If you have connected to a testnet (rinkeby, arbitrum-one-rinkeby), your earnings will show on the Explorer for that network only.
Reward call not being made — missing LPT staking rewards
Your orchestrator is active and receiving jobs, but LPT staking rewards are not accumulating. No reward call transactions appear in your wallet's Arbitrum transaction history.

Cause
If you run your orchestrator process and transcoder process as separate instances (split O+T setup), and the wrong process has the -reward flag enabled, reward calls may not be made correctly. Also, if the ETH balance on Arbitrum is insufficient, reward call transactions will fail.

Fix
- For split orchestrator/transcoder setups, add -reward=false to all transcoder launch commands. Only the orchestrator process should make reward calls. Also remove the -ethUrl option from transcoder processes if they are using the same wallet; this prevents the transcoder from inadvertently submitting on-chain transactions.
- Ensure your Arbitrum ETH balance is sufficient to pay for reward call gas. Check via livepeer_cli or on arbiscan.io.
- If running a single combined orchestrator+transcoder process (the default for most solo operators), reward calls are handled automatically. No additional configuration is needed.
Pool worker earnings not showing
You are contributing compute to an orchestrator pool (e.g. Titan Node, Video Miner, LivePool) as a transcoder worker, but you cannot see your earnings.

Cause
Pool earnings are tracked and distributed by the pool operator, not by go-livepeer directly. The Livepeer Explorer shows the pool orchestrator's earnings, not individual worker earnings. Each pool has its own payout mechanism and reporting interface.

Fix
- Check the pool operator’s dashboard or reporting tool — each pool provides its own interface.
- If earnings are not updating there, contact the pool operator via their Discord or forum. Pool-specific issues are outside Livepeer’s core go-livepeer documentation.
- Verify your transcoder process is running correctly and connected to the pool’s orchestrator address by checking your logs for successful job processing messages.
FAQ — General Questions
What is the difference between an orchestrator and a transcoder?
An orchestrator coordinates work on the network: it registers on-chain, receives jobs from gateways, and handles payments. A transcoder performs the actual video encoding. In the simplest setup, a single process does both, started with the -orchestrator -transcoder flags. In more advanced setups, operators run one orchestrator process that delegates work to multiple transcoder processes, potentially across different machines.
Do I need to keep livepeer_cli running after activation?
livepeer_cli is an interactive management tool. You use it to perform one-time or occasional on-chain actions (activate as orchestrator, update pricing, update service address, manage stake). Once an action is submitted, livepeer_cli can be closed. The livepeer daemon process is what needs to remain running continuously to receive and process jobs.
How long does it take to appear in the active orchestrator set after activation?
What is the minimum LPT stake required to receive jobs?
Can I run an orchestrator on Windows?
What does "Transcode loop timed out" or "Segment loop timed out" mean in my logs?
What does the "ticket parameters" error mean?
Can I run an orchestrator from a home network?
Yes, with some caveats:
- Your orchestrator must be publicly reachable on port 8935. Most home routers require you to configure port forwarding to expose a specific machine on your local network to the internet.
- Home broadband often uses dynamic IP addresses. If your IP changes, your on-chain service URI becomes stale and gateways cannot reach you. Use a dynamic DNS service to maintain a stable hostname, then register with that hostname instead of a bare IP.
- Home broadband upload speeds may limit your transcoding throughput. Each video segment must be received from a gateway, transcoded, and returned — upload capacity directly affects how many concurrent sessions you can handle.