- H.264 levels reference: Wikipedia AVC levels
- NVIDIA GPU support matrix: NVIDIA video encode/decode support matrix
- Arbiscan: arbiscan.io
- Explorer orchestrator list: explorer.livepeer.org/orchestrators
- Cloud SPE AI registry: tools.livepeer.cloud
- Ollama version endpoint: http://localhost:11434/api/version
- Example local and public service URIs: https://127.0.0.1:4433, https://121.5.10.8:8935, https://<your-service-uri>:8935/status, https://orch.yourdomain.com:8935
Transcoding errors
OrchestratorCapped — node not accepting work
- Check your current session count against your configured limit
- If you have spare GPU capacity, increase `-maxSessions` in your launch command
- If you are already at GPU VRAM limits, you cannot safely increase sessions — you need to reduce the model size, reduce output dimensions, or add GPU capacity
- In a split setup, verify both the transcoder machine and the orchestrator have capacity headroom
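Whether it is safe to raise `-maxSessions` comes down to VRAM headroom. The arithmetic can be sketched as below — the per-session and safety-margin figures are illustrative assumptions, so measure real per-session usage on your own GPU with `nvidia-smi` before changing the flag:

```python
# Rough estimate of how many additional concurrent sessions fit in
# remaining VRAM. All figures are illustrative assumptions.

def max_extra_sessions(free_vram_mib: int, per_session_mib: int,
                       safety_margin_mib: int = 1024) -> int:
    """Sessions that fit in free VRAM while keeping a safety margin."""
    usable = free_vram_mib - safety_margin_mib
    if usable <= 0 or per_session_mib <= 0:
        return 0
    return usable // per_session_mib

# Example: 8 GiB free, ~1.5 GiB per transcode session, 1 GiB reserved
print(max_extra_sessions(8192, 1536))
```

If the result is zero, raising `-maxSessions` will only trade capped sessions for OOM failures.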
Transcode loop timed out / Segment loop timed out
MB rate > Level limit warning
Unable to transcode errors
Unsupported input pixel format
GPU and memory errors
Cannot allocate memory (on startup with -nvidia flag)
- Check what is using NVENC sessions on the GPU: `nvidia-smi`
- Stop any processes consuming NVENC sessions (video encoding software, other Livepeer processes)
- If you need more concurrent sessions than your consumer GPU allows, look into driver patching (the `nvenc-patch` approach) or upgrade to a data centre class GPU (RTX A-series or above), which has no session cap
CUDA out of memory (AI inference)
- Check current VRAM usage: `nvidia-smi --query-gpu=memory.used,memory.free --format=csv`
- Reduce the `capacity` value in your `aiModels.json` for the affected pipeline
- If you have multiple warm models loaded, consider making some cold (remove from warm list) to free VRAM for others
- Check whether you have set dimensions in your inference requests that exceed what your VRAM can handle
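Trimming VRAM pressure might look like the following in `aiModels.json` — the model IDs and values are illustrative, and the exact schema should be confirmed against the Livepeer AI documentation:

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "stabilityai/stable-diffusion-xl-base-1.0",
    "warm": true,
    "capacity": 1
  },
  {
    "pipeline": "image-to-video",
    "model_id": "stabilityai/stable-video-diffusion-img2vid-xt",
    "warm": false
  }
]
```

Here `capacity` is reduced to 1 for the warm pipeline, and the second model is left cold so its weights do not occupy VRAM between jobs.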
Node not using GPU for transcoding despite -nvidia flag
The node was launched with `-nvidia` but is not using the GPU. Checklist:

- Run `nvidia-smi` — confirm the GPU is visible to the OS
- Check go-livepeer startup logs for a GPU detection line
- Verify `LD_LIBRARY_PATH` includes CUDA shared libraries if CUDA is not installed to `/usr/local/cuda`
- Confirm NVIDIA Container Toolkit is installed if you are running in Docker: `docker run --gpus all nvidia/cuda:11.0-base nvidia-smi`

Per go-livepeer/doc/gpu.md: if the CUDA location differs from `/usr/local/cuda`, set `LD_LIBRARY_PATH=<path-to-cuda>` when launching.

Reward and gas errors
insufficient funds for gas * price + value
- Check your orchestrator wallet ETH balance on Arbiscan — bridge or transfer ETH to it on Arbitrum One
- As a preventive measure, keep at least 0.02–0.05 ETH in your orchestrator wallet at all times
- If you are using OrchestratorSiphon, configure the `eth_warn` and `eth_minval` thresholds to receive warnings before the wallet goes dry
Node still calling reward despite -reward=false
You set `-reward=false` but your node is still submitting reward transactions and spending gas.

Why this happens: If you are running the orchestrator and transcoder as separate processes (split setup), `-reward=false` must be set in every launch command. A transcoder process running with the same Ethereum wallet and a separate config may be calling reward independently.

How to fix it:

- Audit all running `livepeer` processes: `ps aux | grep livepeer`
- Add `-reward=false` to every launch command
- As an extra precaution, remove the `-ethUrl` option from any transcoder process that shares the same wallet. Without an ETH URL, the transcoder cannot submit on-chain transactions at all.
- When using a `.conf` file for configuration, the command-line flag overrides the file. Always pass `-reward=false` explicitly at launch.
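A split setup with reward calls disabled everywhere might look like the sketch below. Only `-reward=false`, `-ethAcctAddr`, and the absence of `-ethUrl` on the transcoder are the point here; the other flags and placeholders are illustrative, not a complete recommended configuration:

```shell
# Orchestrator: holds the wallet, explicitly disables reward calls
livepeer -network arbitrum-one-mainnet -orchestrator \
  -ethAcctAddr 0xYourOrchestratorWallet \
  -reward=false

# Standalone transcoder: no -ethUrl, so it cannot submit
# on-chain transactions even if otherwise misconfigured
livepeer -transcoder -orchAddr <orchestrator-ip>:8935
```

The key design choice is starving the transcoder of any ETH RPC endpoint, which makes accidental reward calls from that process impossible.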
Missing reward rounds despite -reward=true
Reward calls are enabled (you did not set `-reward=false`), but the Explorer shows missed rounds.

Diagnose in order:

- ETH balance — low balance causes reward calls to fail silently. Check `http://localhost:7935/status` for ETH balance or look at Arbiscan.
- Node was offline — a node down at the round boundary (~every 22 hours on Arbitrum) misses the call. Check your systemd or service uptime logs.
- Multiple processes competing — two go-livepeer processes sharing the same wallet can submit a failing duplicate.
TicketParams expired
AI runner errors
AI runner container not starting
Wrong model ID — model fails to load
The model ID in `aiModels.json` does not resolve to a valid model. Model IDs are case-sensitive and must include the organisation prefix. For example:

- ✓ `stabilityai/stable-diffusion-xl-base-1.0`
- ✗ `stable-diffusion-xl-base-1.0` (missing org prefix)
- ✗ `StabilityAI/stable-diffusion-xl-base-1.0` (wrong case)
Do not use an Ollama-style tag (e.g. `llama3:8b`) in `aiModels.json`. Use the HuggingFace model ID format instead.
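A quick sanity check you could run over your configured model IDs — a hypothetical helper, not part of go-livepeer — catches two of the failure modes above (missing org prefix, Ollama-style tags); wrong case still needs a lookup against the model hub:

```python
import re

# HuggingFace-style IDs look like "org/name"; Ollama tags look like "name:tag".
# Hypothetical validator -- not part of go-livepeer.
HF_ID = re.compile(r"^[\w.-]+/[\w.-]+$")

def check_model_id(model_id: str) -> str:
    if ":" in model_id:
        return "error: Ollama-style tag; use the HuggingFace model ID"
    if not HF_ID.match(model_id):
        return "error: missing organisation prefix (expected org/name)"
    return "ok"

print(check_model_id("stabilityai/stable-diffusion-xl-base-1.0"))  # ok
print(check_model_id("llama3:8b"))
print(check_model_id("stable-diffusion-xl-base-1.0"))
```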
AI pipeline registered but receiving no jobs
- Outside the active set — check Livepeer Explorer. Nodes outside the top 100 by stake receive no transcoding or AI jobs.
- Price too high — your per-capability price exceeds the gateway’s `-maxPricePerCapability` limit. Check market rates on tools.livepeer.cloud and compare with other operators.
- Model is cold — the warm model is not loaded in VRAM. Jobs may time out before the model loads, and cold starts often take 30 to 120 seconds. Make sure your intended warm model is listed in the `warm` section of `aiModels.json`.
- Capability not registered — query your node’s registered capabilities: `curl http://localhost:7935/getNetworkCapabilities | jq`. If the pipeline is missing, check your `aiModels.json` configuration and that the AI runner started successfully.
OOM during AI inference (job starts, then fails)
- Reduce the `capacity` value for the pipeline in `aiModels.json`
- Reduce `maxSessions` for AI inference specifically
- If the OOM happens for large output requests but not small ones, consider whether you need to restrict accepted request dimensions at the gateway level
ComfyStream container failing (live-video AI)
- Model weights are downloaded and accessible at the path configured in your ComfyStream setup
- CUDA toolkit version matches what the container expects — check `nvidia-smi` for your driver version and confirm container CUDA compatibility
- The container has sufficient VRAM for the workflow — live-video streaming requires the model to remain resident during the stream
- Check container logs for the specific Python exception: `docker logs -f <comfystream-container>`
LLM pipeline (Ollama) not receiving jobs
- Verify the Ollama container is running and accessible: `curl http://localhost:11434/api/version`
- Confirm go-livepeer can reach the Ollama endpoint. Containerised deployments usually need both services on the same Docker network.
- Re-register the LLM capability: restart go-livepeer to force capability re-advertisement
- Check that the model ID in `aiModels.json` matches an installed Ollama model: `ollama list`
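One way to put both services on the same Docker network is a compose file along these lines — the service names and the omitted flags/volumes are illustrative assumptions, not a full deployment:

```yaml
services:
  ollama:
    image: ollama/ollama
    networks: [livepeer-net]

  orchestrator:
    image: livepeer/go-livepeer
    # Inside this network, Ollama is reachable at http://ollama:11434,
    # not http://localhost:11434
    networks: [livepeer-net]

networks:
  livepeer-net:
```

Note the endpoint change: from inside the orchestrator container, `localhost` refers to that container, so the Ollama URL must use the service name.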
Networking and connectivity
Service address mismatch warning at startup
- If your IP changed: update your on-chain service URI using `livepeer_cli`
- If your IP is correct but different from what the node auto-detects: override with `-serviceAddr <public-ip>:8935`
- Confirm your node is actually reachable at that address: `curl -v https://<your-service-uri>:8935/status`
Node not receiving any jobs despite being in active set
Running behind a NAT or home router
- Port forwarding — forward port 8935 on your router to your node’s local IP. This is the standard approach.
- DMZ — place your node in the router’s DMZ to receive all unsolicited inbound traffic. Less secure but simpler.
- Hairpinning (if needed) — some networks require iptables rules to handle internal-to-external traffic loops
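The exact hairpin rules depend on your router and LAN layout. A generic sketch with iptables — every address and port here is a placeholder (203.0.113.10 = public IP, 192.168.1.50 = node LAN IP, 192.168.1.0/24 = LAN subnet) that you must adapt before use:

```shell
# Redirect LAN traffic aimed at the public IP:8935 to the node
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -d 203.0.113.10 \
  -p tcp --dport 8935 -j DNAT --to-destination 192.168.1.50:8935

# Masquerade so return traffic flows back through the router
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.50 \
  -p tcp --dport 8935 -j MASQUERADE
```

Without the second rule, the node replies directly to the LAN client, which then drops the packets because it expected them from the public IP.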
What is the service URI — can it be a hostname?
- IP address: `https://121.5.10.8:8935` — static IPs are preferred for consistency
- DNS name: `https://orch.yourdomain.com:8935` — allowed and recommended if you want flexibility to change the underlying IP without an on-chain transaction

Set the service URI via `livepeer_cli`. Use DNS if you anticipate IP changes.

Account and keystore errors
Error creating Ethereum account manager
The node cannot load its Ethereum keystore, which by default lives under `~/.lpData/arbitrum-one-mainnet/keystore/`.

How to fix it:

- List files in your keystore directory: `ls -la ~/.lpData/<network>/keystore/`
- Confirm the file matching your `-ethAcctAddr` is present (`UTC--`-prefixed JSON file)
- Check file permissions — the keystore file should be readable by the user running go-livepeer: `chmod 600 <keystore-file>`
- If you used a different `-datadir` in the past, the keystore may be under a different path. Locate it and copy it to the correct location.
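Geth-style keystore filenames end with the lowercase address without the `0x` prefix, e.g. `UTC--<timestamp>--<address>`. A small hypothetical helper (not part of go-livepeer) to check that your `-ethAcctAddr` has a matching file:

```python
# Hypothetical helper: find the keystore file matching an address.
# Geth-style names end with the lowercase address, without the 0x prefix.

def find_keystore(filenames, eth_acct_addr):
    suffix = eth_acct_addr.lower().removeprefix("0x")
    for name in filenames:
        if name.startswith("UTC--") and name.lower().endswith(suffix):
            return name
    return None

files = [
    "UTC--2024-01-01T00-00-00.000000000Z--"
    "a1b2c3d4e5f60718293a4b5c6d7e8f9012345678"
]
print(find_keystore(files, "0xA1B2C3D4E5F60718293A4B5C6D7E8F9012345678"))
```

If this returns `None` for your address, the keystore file is missing from the directory rather than merely unreadable.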