Build the docker-compose file that will carry the node through the rest of setup. Pick one operating mode — video transcoding, AI inference, or dual mode — and choose the tab that matches the intended workload. All three modes use the same binary and differ only by flags. Leave the Arbitrum connection and staking flags for the connect-and-activate step.
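As a starting point, a minimal compose file might look like the sketch below. The image tag, service name, domain, and price are illustrative; substitute the flags for your chosen mode:

```yaml
# Hypothetical minimal docker-compose.yml sketch (video mode shown).
services:
  orchestrator:
    image: livepeer/go-livepeer:latest   # pin a specific release in production
    command: >-
      -orchestrator
      -transcoder
      -nvidia all
      -serviceAddr orch.yourdomain.com:8935
      -pricePerUnit 1000
    ports:
      - "8935:8935"
    volumes:
      - ~/.lpData:/root/.lpData
    restart: unless-stopped
```

The sections below explain each of these settings in turn.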
Common configuration
Networking
The -serviceAddr flag declares the public address gateways connect to. This address must be reachable from the internet:
# IP address
-serviceAddr 203.0.113.42:8935
# Domain name (preferred - survives IP changes without re-registration)
-serviceAddr orch.yourdomain.com:8935
Port 8935 must be open inbound on the firewall. Test reachability from a different machine before connecting on-chain:
curl -k https://YOUR_PUBLIC_IP:8935/status
Any response (including a JSON error) confirms the port is open. A connection timeout means the firewall is blocking it.
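If the test times out, open the port in the host firewall. A sketch using ufw — assuming ufw is the firewall in use; adjust for iptables or your cloud provider's security groups:

```shell
# Allow inbound TCP on the service port, then confirm the rule exists.
sudo ufw allow 8935/tcp
sudo ufw status | grep 8935
```
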
Port 7935 (-cliAddr) is for local CLI access and Prometheus metrics. Keep it bound to localhost unless external monitoring is configured.
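If you need CLI access or metrics from another machine without exposing 7935, one option is an SSH tunnel (username and hostname are placeholders):

```shell
# Forward a local port to the node's localhost-bound CLI port.
ssh -L 7935:127.0.0.1:7935 user@orch.yourdomain.com
# Then, on the local machine: curl http://127.0.0.1:7935/status
```
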
GPU selection
Run nvidia-smi -L to list GPU device IDs:
GPU 0: NVIDIA GeForce RTX 4090 (UUID: GPU-...)
GPU 1: NVIDIA GeForce RTX 3090 (UUID: GPU-...)
Use the device index in -nvidia:
-nvidia 0 # Single GPU
-nvidia 0,1 # Two GPUs
-nvidia all # All available GPUs
For mixed GPU configurations (different VRAM sizes), assign specific GPU IDs to specific pipelines in aiModels.json using the gpu field.
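For example, a two-GPU host might pin a heavier pipeline to the larger card. A hypothetical aiModels.json fragment — model IDs, prices, and field values are illustrative; check the schema for exact types:

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "ByteDance/SDXL-Lightning",
    "price_per_unit": 4000000,
    "warm": true,
    "gpu": "0"
  },
  {
    "pipeline": "image-to-video",
    "model_id": "stabilityai/stable-video-diffusion-img2vid-xt",
    "price_per_unit": 6000000,
    "warm": false,
    "gpu": "1"
  }
]
```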
Pricing
-pricePerUnit sets the video transcoding price in wei per pixel (not ETH). A typical range is 500-2,000 wei per pixel; start below 1,000 wei per pixel and adjust based on job volume.
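To get a feel for the scale, a quick back-of-the-envelope at an assumed price of 1,000 wei per pixel: one second of 1080p30 output is about 62 billion wei, roughly 0.000000062 ETH.

```shell
# 1080p frame = 1920*1080 pixels, 30 fps, assumed price of 1000 wei/pixel.
echo $((1920 * 1080 * 30 * 1000))   # wei per second of 1080p30 output
```
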
For AI pricing, set price_per_unit per pipeline in aiModels.json. Values are in wei per output pixel (or per ms for audio, per token for LLM).
For competitive positioning guidance, see .
Persistent data
Always mount the data directory so the keystore and data survive container restarts:
volumes:
- ~/.lpData:/root/.lpData
Without this mount, go-livepeer creates a new Ethereum account on every container start, losing the previous keystore and all bonded LPT.
For AI workloads, also mount the Docker socket, which the node uses to launch AI runner containers:
volumes:
- ~/.lpData:/root/.lpData
- /var/run/docker.sock:/var/run/docker.sock
The aiModelsDir must be a host path. Docker mounts that path into AI runner containers.
Verify configuration before connecting on-chain
Start the node and confirm it initialises cleanly before proceeding to the Connect step:
Check for clean startup:
docker compose logs -f 2>&1 | grep -iE "gpu|transcode|ai-runner|error|FATAL" | head -20
Expected for video mode:
Transcoding on Nvidia GPU 0
Listening for RPC on :8935
Expected for AI mode (after model download):
Starting AI worker
Warm model loaded: ByteDance/SDXL-Lightning
Resolve FATAL or repeated error lines before connecting on-chain.
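As a final sanity check before moving on, query the local status endpoint on the CLI port (assuming -cliAddr is left at its localhost default):

```shell
# Should return JSON describing the node's current state; a refused
# connection means the node is not up or -cliAddr is bound elsewhere.
curl -s http://127.0.0.1:7935/status
```
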
Next step