This is Tutorial 3 of 3.
This tutorial graduates from the local off-chain setup in Tutorials 1 and 2 to a live Livepeer network deployment. There are three independent upgrades - apply any or all depending on your use case.

Time: 30-90 minutes, depending on which upgrades you apply
Cost: ETH on Arbitrum (Upgrade 1 only)
What you need: Tutorial 1 and/or Tutorial 2 completed

Which upgrades do you need?

The three upgrades are independent. Choose based on your persona:
| Your situation | Upgrade 1: On-chain | Upgrade 2: GPU | Upgrade 3: Network |
|---|---|---|---|
| AI app developer - self-hosting gateway for cost savings, using remote signer | ✗ optional | ✓ yes | ✓ yes |
| Gateway-as-a-Service provider - public gateway, inference fees, SPE grants | ✓ yes | ✓ yes | ✓ yes |
| SDK / alternative gateway builder | ✗ remote signer | depends | ✓ yes |
| Video operator - transcoding, broadcast | ✓ required | ✗ orchestrators have GPU | ✓ yes |
| Platform builder - clearinghouse / NaaP | ✗ clearinghouse | ✓ yes | ✓ yes |
If you are running an AI gateway (BYOC, LV2V, inference workloads), Upgrade 1 is optional. The off-chain remote signer model introduced in Q4 2025 allows production AI gateway operation without holding ETH directly. If you are running a video transcoding gateway, Upgrade 1 is required - the video pathway does not support remote signers.

Upgrade 1 - On-chain registration

What this gives you

On-chain registration connects your gateway to the Livepeer protocol’s payment system on Arbitrum One:
  • Your gateway can send ETH probabilistic micropayment (PM) tickets to orchestrators
  • Orchestrators outside your explicit -orchAddr list can discover and serve your gateway
  • For video gateways: required for any production transcoding
  • For AI gateways: optional when using a remote signer, but enables full on-chain custody

1.1 - Acquire ETH on Arbitrum One

You need ETH on Arbitrum One (not Ethereum mainnet). The approximate requirement is:
| Item | Amount | Purpose |
|---|---|---|
| PM deposit | ~0.065 ETH | Funds payment tickets sent to orchestrators |
| PM reserve | ~0.03 ETH | Reserve for ticket redemption by orchestrators |
| Gas buffer | ~0.01 ETH | Transaction fees on Arbitrum |
| Total | ~0.1 ETH | Safe starting amount |
ETH price volatility affects these amounts - a Livepeer deposit of a given USD value buys more or fewer tickets as the ETH/USD rate moves. Check current requirements in #lounge on Discord or the on-chain requirements page before depositing.
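As a quick sanity check, the amounts above can be priced in USD under an assumed ETH/USD rate. The 3000 figure below is purely illustrative - take a live rate before depositing:

```shell
# Illustrative only: ETH_USD is an assumed rate, not a live quote
ETH_USD=3000
for item in "deposit 0.065" "reserve 0.03" "gas 0.01"; do
  set -- $item   # split into name ($1) and ETH amount ($2)
  usd=$(awk -v rate="$ETH_USD" -v eth="$2" 'BEGIN { printf "%.2f", rate * eth }')
  echo "$1: $2 ETH = \$$usd"
done
```

At that assumed rate the deposit is roughly $195 and the reserve roughly $90; rerun with the current rate to size your purchase.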
Options for getting ETH on Arbitrum One:
  • Bridge from Ethereum mainnet: bridge.arbitrum.io - official bridge, ~15 minutes
  • Buy directly on Arbitrum: Coinbase, Binance, and OKX all support direct withdrawal to Arbitrum One
  • Base → Arbitrum: if you have ETH on Base, use the Base bridge then Across Protocol
The cheapest path for most people is to buy ETH on a CEX that supports Arbitrum One withdrawals (Coinbase, Binance, OKX) and withdraw directly. This avoids the Ethereum mainnet bridge fee (~$5-20 at moderate gas) and the bridging wait time.

1.2 - Create a dedicated wallet for your gateway

Never use a personal wallet for gateway operations. Create a dedicated keystore:
./livepeer \
  -network arbitrum-one-mainnet \
  -datadir ~/.lpData-gw-prod
# Follow the CLI prompts to initialise a new keystore
Note your gateway’s Ethereum address - you will send ETH to it in the next step. Alternatively, use cast (from Foundry) to generate a key:
cast wallet new
# Save the private key and address securely - treat this like a server key
This wallet will hold ETH for gateway operations. Use a hardware wallet or a dedicated key management solution for any material amount. Never put the keystore file in a public repository.
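One minimal hardening step is restricting filesystem access to the datadir. This sketch assumes the tutorial's -datadir path; adjust if yours differs:

```shell
# Restrict the gateway datadir (which contains the keystore) to the owning user
DATADIR="$HOME/.lpData-gw-prod"
mkdir -p "$DATADIR"
chmod 700 "$DATADIR"
stat -c '%a' "$DATADIR"   # prints: 700
```

This does not replace proper key management for material balances, but it keeps other local users away from the keystore file.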

1.3 - Deposit PM funds on-chain

Send ETH to your gateway address, then deposit using livepeer_cli:
./livepeer_cli \
  -network arbitrum-one-mainnet \
  -ethUrl YOUR_ARBITRUM_RPC_URL \
  -datadir ~/.lpData-gw-prod
From the interactive menu:
  1. Select “Deposit broadcasting funds (ETH)”. Enter approximately 0.065 - this becomes your PM deposit (funds the tickets you send to orchestrators).
  2. Select “Fund reserve for PM” (if shown separately). Enter approximately 0.03 - this is the reserve orchestrators can claim from if your deposit runs out.
After depositing, confirm your balance:
# In the livepeer_cli menu, select:
# "Get node status"
# Look for: "PM Deposit" and "PM Reserve" values
You can also fund your gateway using cast send directly to the Livepeer TicketBroker contract on Arbitrum One. The contract address is in the Contract Addresses reference. This is useful for scripting gateway fund management in CI/CD pipelines.
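If you script this, amounts go on-chain in wei. Below is a sketch: the wei arithmetic is exact, while the commented cast send template is an assumption - verify the BROKER address against the Contract Addresses reference and the function signature against the TicketBroker ABI before sending real funds:

```shell
# Integer wei amounts for the tutorial's deposit/reserve (1 ETH = 10^18 wei)
# Uses bash arithmetic to avoid floating-point rounding
DEPOSIT_WEI=$((65 * 10**15))   # 0.065 ETH
RESERVE_WEI=$((3 * 10**16))    # 0.030 ETH
echo "deposit=${DEPOSIT_WEI} reserve=${RESERVE_WEI}"

# Hypothetical template -- confirm address and ABI first:
# cast send "$BROKER" "fundDepositAndReserve(uint256,uint256)" \
#   "$DEPOSIT_WEI" "$RESERVE_WEI" \
#   --value $((DEPOSIT_WEI + RESERVE_WEI)) --rpc-url "$RPC"
```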

1.4 - Start your gateway on-chain

Replace -network offchain with your Arbitrum RPC:
./livepeer \
  -gateway \
  -network arbitrum-one-mainnet \
  -ethUrl YOUR_ARBITRUM_RPC_URL \
  -datadir ~/.lpData-gw-prod \
  -httpAddr 0.0.0.0:8935 \
  -httpIngest \
  -v 6
For an Arbitrum RPC endpoint, use Infura (https://arbitrum-mainnet.infura.io/v3/YOUR_KEY) or Alchemy (https://arb-mainnet.g.alchemy.com/v2/YOUR_KEY). Both offer free tiers sufficient for a single gateway. Self-hosting an Arbitrum node is possible but unnecessary for gateway operation.
Verifying on-chain registration: Check your gateway is visible on the network:
curl http://localhost:5935/status
# Look for: "ethAddress" and "pmInfo" sections
Visit explorer.livepeer.org and search for your gateway’s ETH address. Once funded, it will appear in the protocol state.

Upgrade 2 - GPU pipelines

What this gives you

GPU acceleration enables real-time AI inference that is not possible at production throughput on CPU. The orchestrator uses NVIDIA GPUs for:
  • Standard ai-runner pipelines: text-to-image, image-to-video, LLM, upscale, etc.
  • BYOC containers with GPU-accelerated models (swap the base image)
  • Video transcoding via NVENC/NVDEC (faster than CPU libx264)
As the gateway operator, you do not need a GPU unless you are also running your own orchestrator. Gateways are routing nodes - compute lives on orchestrators. If you are self-hosting an orchestrator alongside your gateway (Tutorial 1 / 2 pattern), then yes, that orchestrator needs a GPU for AI workloads. If you are routing to public network orchestrators (Upgrade 3), their GPUs handle everything.

2.1 - GPU requirements by pipeline type

| Pipeline | Minimum VRAM | Recommended |
|---|---|---|
| text-to-image (SD-turbo) | 4 GB | 8 GB |
| image-to-image (SDXL) | 8 GB | 12 GB |
| image-to-video (SVD) | 16 GB | 24 GB |
| live-video-to-video (StreamDiffusion) | 8 GB | 16 GB |
| LLM (Llama-3.2-3B) | 6 GB | 12 GB |
| Video transcoding (NVENC) | Any CUDA GPU | - |
GPU VRAM requirements are per active pipeline. Running multiple concurrent pipeline types on a single GPU requires enough VRAM for the largest model loaded. Check the hardware requirements reference for the current model matrix before purchasing hardware.
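The table can be encoded as a quick pre-purchase check. The values mirror the table above; the function and pipeline keys are just this sketch's own names:

```shell
# Does a GPU with N GB of VRAM meet the minimum for a pipeline?
meets_min() {  # usage: meets_min <pipeline> <vram_gb>
  case "$1" in
    text-to-image)       min=4  ;;
    image-to-image)      min=8  ;;
    image-to-video)      min=16 ;;
    live-video-to-video) min=8  ;;
    llm)                 min=6  ;;
    *) echo "unknown pipeline: $1" >&2; return 2 ;;
  esac
  [ "$2" -ge "$min" ] && echo yes || echo no
}

meets_min image-to-video 12   # prints: no  (SVD needs 16 GB)
meets_min text-to-image 8     # prints: yes
```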

2.2 - Enable GPU on the orchestrator

Add -nvidia to your orchestrator startup command:
./livepeer \
  -orchestrator \
  -network offchain \
  -serviceAddr 127.0.0.1:8936 \
  -nvidia 0 \
  -datadir ~/.lpData-orch
# -network: use arbitrum-one-mainnet for production
# -nvidia 0: use GPU index 0 (the first GPU)
For multiple GPUs:
  -nvidia 0,1,2    # Use GPUs 0, 1, and 2
  -nvidia all      # Use all available NVIDIA GPUs
Verify GPU is detected:
# In livepeer_cli, select "Get node status"
# Look for: "GPU: NVIDIA GeForce RTX XXXX" entries

2.3 - Run your GPU BYOC container (BYOC operators only)

If you built a BYOC container in Tutorial 2, swap the base image to use CUDA:
# GPU-enabled BYOC base
FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    python3 python3-pip git \
    && rm -rf /var/lib/apt/lists/*

RUN pip install --no-cache-dir \
    git+https://github.com/livepeer/pytrickle.git \
    torch torchvision --index-url https://download.pytorch.org/whl/cu121

COPY processor.py ./processor.py
EXPOSE 8000
ENTRYPOINT ["python3", "processor.py"]
Start the container with GPU access:
docker run -d \
  --name byoc-gpu \
  --network host \
  --gpus device=0 \
  byoc-gpu-pipeline:latest
The --gpus device=0 flag passes GPU 0 to the container. Use --gpus all to pass all GPUs. Requires the NVIDIA Container Toolkit installed on the host (apt install nvidia-container-toolkit).

Upgrade 3 - Network connect

What this gives you

Removing -orchAddr localhost and switching to network discovery means your gateway:
  • Routes to the full public orchestrator network (hundreds of orchestrators globally)
  • Automatically selects orchestrators by price, capability, and past performance
  • Can handle demand spikes without running your own hardware
  • Participates in the Livepeer network’s economic system

3.1 - On-chain gateway: automatic discovery

If you completed Upgrade 1, network discovery works automatically. Remove -orchAddr entirely:
./livepeer \
  -gateway \
  -network arbitrum-one-mainnet \
  -ethUrl YOUR_ARBITRUM_RPC_URL \
  -datadir ~/.lpData-gw-prod \
  -httpAddr 0.0.0.0:8935 \
  -httpIngest \
  -maxPricePerUnit 1000 \
  -v 6
The gateway queries the Arbitrum registry for registered orchestrators and selects based on stake weight (video) or capability + price (AI).
-maxPricePerUnit sets the maximum price your gateway will pay an orchestrator per unit of work (for video and image pipelines, a unit is pixels) - not a price you charge your customers. If an orchestrator quotes higher than this, the gateway rejects it. Start with a permissive value (e.g., 1000 wei per pixel) and lower it once you understand the typical market rate in the Explorer.
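For intuition on what a per-pixel price means in practice, here is back-of-envelope arithmetic for a single 720p frame at the permissive starting value - pure arithmetic, not a network quote:

```shell
PRICE_WEI_PER_PIXEL=1000          # the permissive starting value above
PIXELS_PER_FRAME=$((1280 * 720))  # one 720p frame
COST=$((PRICE_WEI_PER_PIXEL * PIXELS_PER_FRAME))
echo "720p frame: ${PIXELS_PER_FRAME} pixels, ${COST} wei max"
```

That is 921,600,000 wei (about 9.2e-10 ETH) per frame at most - multiply by frame rate and stream duration to bound a job's cost.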

3.2 - Off-chain gateway: orchestrator list or remote signer discovery

Off-chain gateways cannot use the Arbitrum registry. You have three options.

Option A - Explicit orchestrator list (simplest):
./livepeer \
  -gateway \
  -network offchain \
  -orchAddr https://orch1.example.com:8935,https://orch2.example.com:8935 \
  -httpAddr 0.0.0.0:8935 \
  -httpIngest \
  -remoteSignerAddr https://signer.eliteencoder.net
Find available AI orchestrators in the Livepeer Explorer → Orchestrators → filter by capability.

Option B - Discovery endpoint: Some orchestrators publish a webhook-format discovery endpoint. Point your gateway at it:
  -orchAddr https://discovery.livepeer.cloud/orchestrators
The gateway polls this endpoint for fresh orchestrator lists on the same schedule as the on-chain webhook cadence (~1 minute).

Option C - Remote signer discovery (recommended for AI gateways): The community remote signer at signer.eliteencoder.net provides a GetOrchestrators endpoint that returns on-chain orchestrator data, parameterised by capability and model ID. This removes the need to manually manage orchestrator lists:
./livepeer \
  -gateway \
  -network offchain \
  -remoteSignerAddr https://signer.eliteencoder.net \
  -httpAddr 0.0.0.0:8935 \
  -httpIngest
  # No -orchAddr needed: remote signer provides discovery
The community remote signer at signer.eliteencoder.net is operated by John (Elite Encoder) and provides free ETH for test workloads. Confirm availability in #local-gateways on Discord before using it in production. For production AI gateways, run your own remote signer using go-livepeer’s -remoteSigner mode with a dedicated funded Ethereum key.

3.3 - Set pricing

Before routing to public orchestrators, set your max price to avoid overpaying.

For video transcoding:
./livepeer_cli -network arbitrum-one-mainnet -ethUrl YOUR_RPC -datadir ~/.lpData-gw-prod
# Select: "Set max price for transcoding"
# Enter: e.g. 0.01 USD (auto-converts to wei using the Chainlink ETH/USD feed)
For AI pipelines, use per-capability pricing. Create ai-pricing.json:
{
  "capabilities_prices": [
    {
      "pipeline": "text-to-image",
      "model_id": "stabilityai/sd-turbo",
      "price_per_unit": 1000,
      "pixels_per_unit": 1
    },
    {
      "pipeline": "live-video-to-video",
      "model_id": "streamdiffusion",
      "price_per_unit": 500,
      "pixels_per_unit": 1
    }
  ]
}
Pass to the gateway:
  -maxPricePerCapability /path/to/ai-pricing.json
Common confusion: -maxPricePerUnit and -maxPricePerCapability set the maximum price your gateway pays to orchestrators for compute. This is not the price you charge your own customers. If you are building a Gateway-as-a-Service product, your user-facing pricing is entirely separate from this network-level setting.
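A malformed pricing file is easy to catch before the gateway reads it. A small check, using a throwaway copy under /tmp for the demo:

```shell
# Write a one-entry copy of the pricing file and confirm it parses as JSON
cat > /tmp/ai-pricing.json <<'EOF'
{"capabilities_prices": [{"pipeline": "text-to-image",
  "model_id": "stabilityai/sd-turbo",
  "price_per_unit": 1000, "pixels_per_unit": 1}]}
EOF
python3 -m json.tool /tmp/ai-pricing.json > /dev/null && echo "valid JSON"
```

Run the same python3 -m json.tool check against your real file path before restarting the gateway.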

3.4 - Verify public orchestrator routing

Once connected to the network, verify jobs are routing to public orchestrators:
# In livepeer_cli, select "Get node status"
# Look for: "Connected Orchestrators" - should show public IP addresses, not localhost
Check the Livepeer Explorer → your gateway address → “Recent Sessions” to see which orchestrators have handled your jobs. Use tools.livepeer.cloud for a richer view of orchestrator performance, pricing, and availability before locking in your orchestrator selection strategy.

Putting it together - full production command

Here is a complete production gateway command incorporating all three upgrades (on-chain registration, a GPU-equipped orchestrator, public network routing):
./livepeer \
  -gateway \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb-mainnet.g.alchemy.com/v2/YOUR_KEY \
  -datadir ~/.lpData-gw-prod \
  -httpAddr 0.0.0.0:8935 \
  -httpIngest \
  -maxPricePerUnit 1000 \
  -maxPricePerCapability /etc/livepeer/ai-pricing.json \
  -livePaymentInterval 5s \
  -v 4
For a production deployment, run this under systemd or your preferred process manager. A minimal systemd unit:
[Unit]
Description=Livepeer Gateway
After=network.target

[Service]
ExecStart=/usr/local/bin/livepeer \
  -gateway \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb-mainnet.g.alchemy.com/v2/YOUR_KEY \
  -datadir /var/lib/livepeer/gateway \
  -httpAddr 0.0.0.0:8935 \
  -httpIngest \
  -maxPricePerUnit 1000 \
  -v 4
Restart=on-failure
RestartSec=10
User=livepeer
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
Run your gateway behind a TLS-terminating reverse proxy (nginx, Caddy) before exposing it to the internet. The go-livepeer gateway serves plain HTTP. If you are building a public API product on top of the gateway, your load balancer handles TLS, rate limiting, and auth - the gateway only handles Livepeer network routing.
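As one way to terminate TLS, a minimal Caddyfile sketch - gateway.example.com is a placeholder hostname, and Caddy provisions certificates for it automatically:

```
gateway.example.com {
    reverse_proxy 127.0.0.1:8935
}
```

An nginx server block with an ACME client achieves the same; the essential point is that only the proxy, never the gateway's plain-HTTP port, faces the internet.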

Monitoring your production gateway

Once live, set up monitoring before you need it:
# Check PM deposit balance (alerts you before it runs dry)
./livepeer_cli -network arbitrum-one-mainnet -ethUrl YOUR_RPC -datadir ~/.lpData-gw-prod
# Select: "Get node status" → check "PM Deposit" value

# Watch gateway logs for payment and routing health
journalctl -u livepeer-gateway -f | grep -E "(Ticket|Session|Error|Warn)"

# Query gateway logs via Loki (if available)
curl -G 'https://loki.livepeer.report/loki/api/v1/query_range' \
  --data-urlencode 'query={job="livepeer-gateway"}' \
  --data-urlencode 'limit=100'
Set up an alert on your gateway’s PM deposit balance. If the deposit runs to zero, orchestrators will stop serving your jobs immediately - they check the deposit before accepting work. A deposit of 0.065 ETH will last weeks to months depending on your job volume, but top it up before it reaches zero to avoid scrambling during an incident.
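The alerting logic itself can be trivial. A sketch of the threshold comparison - the numbers are illustrative, and in a real setup you would feed in the deposit value read from your status check:

```shell
# Toy threshold check: alert when the deposit (in ETH) falls below a floor
check_deposit() {  # usage: check_deposit <deposit_eth> <threshold_eth>
  if awk -v d="$1" -v t="$2" 'BEGIN { exit !(d < t) }'; then
    echo "ALERT: deposit $1 ETH is below the $2 ETH floor"
  else
    echo "OK: deposit $1 ETH"
  fi
}

check_deposit 0.012 0.02   # prints the ALERT line
check_deposit 0.065 0.02   # prints the OK line
```

Wire the ALERT branch into whatever notifies you (cron + mail, a webhook, PagerDuty) so a draining deposit surfaces before jobs start failing.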
The Monitor and Optimise page covers the full production monitoring stack: Prometheus metrics exporter, Grafana dashboards, Loki log queries, and ETH balance alerting.

Troubleshooting

On-chain mode: confirm the Arbitrum RPC is working (curl -s -X POST YOUR_RPC -H 'Content-Type: application/json' -d '{"method":"eth_blockNumber","params":[],"id":1,"jsonrpc":"2.0"}') and the Gateway has a funded PM deposit. Orchestrators only respond to Gateways with valid deposits.

Off-chain mode: an -orchAddr or -remoteSignerAddr with discovery support must be provided. There is no automatic discovery in off-chain mode.
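The eth_blockNumber result comes back as a hex string; bash can convert it directly - the value below is made up for illustration, a real one comes from your RPC response:

```shell
HEX=0x1a2b3c    # example value, not a real block number
echo $((HEX))   # bash evaluates 0x-prefixed values as hex -> 1715004
```

A steadily increasing number across repeated calls confirms the RPC is live and synced.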
Most common causes:
  1. PM deposit is zero or very low - fund it via livepeer_cli
  2. -maxPricePerUnit is lower than any available Orchestrator’s price - raise it temporarily to diagnose
  3. The connected Orchestrators do not support the required capability/model - check the Explorer for Orchestrators advertising the needed capability
GPU check:
nvidia-smi  # Must show the GPU
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi  # Docker GPU access
If nvidia-smi works but the Orchestrator fails, check that the NVIDIA Container Toolkit is installed: nvidia-ctk --version.
Arbitrum transactions are cheap (fractions of a cent) but require a small ETH gas balance. Ensure the Gateway wallet has at least 0.005 ETH above the intended deposit amount for gas. If using a public RPC endpoint, try a different provider - Alchemy and Infura are generally reliable.
If your deposit drains faster than expected: either job volume is higher than anticipated, or the Orchestrators are pricing above budget. Check the AI Dune Dashboard for current network pricing and compare against your -maxPricePerUnit / -maxPricePerCapability settings.
With a working production Gateway, the follow-up guides cover each operational area in depth.
Last modified on March 16, 2026