Livepeer orchestrators use NVIDIA GPUs for video transcoding (NVENC/NVDEC hardware encoders) and AI inference (CUDA cores / Tensor cores). This page covers GPU compatibility, session limits, and driver requirements.

Supported GPU Families

go-livepeer requires NVIDIA GPUs with NVENC and NVDEC support. AMD and Intel GPUs are not supported.

NVENC Session Limits

Consumer NVIDIA GPUs enforce a hard limit on concurrent NVENC encoding sessions. This directly limits how many simultaneous transcoding streams your orchestrator can handle per GPU.
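As a back-of-envelope capacity check, multiply the per-GPU session cap by your GPU count. The figure of 8 sessions below is an assumption based on recent consumer drivers (older drivers capped at 3 or 5); verify against NVIDIA's Video Codec SDK support matrix for your driver:

```shell
# Rough orchestrator capacity: NVENC sessions per GPU x number of GPUs.
# SESSIONS_PER_GPU=8 is an assumption for recent consumer drivers --
# check NVIDIA's support matrix for your driver version.
SESSIONS_PER_GPU=8
NUM_GPUS=2
TOTAL_STREAMS=$(( SESSIONS_PER_GPU * NUM_GPUS ))
echo "$TOTAL_STREAMS"   # maximum concurrent transcode sessions
```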

Removing the Session Limit

The community-maintained nvidia-patch removes the NVENC session limit on consumer GPUs. It is widely used by Livepeer orchestrators and pool operators; Titan Node, for example, applies it in its worker setup.
# Example (Linux) — always check the repo for current instructions
git clone https://github.com/keylase/nvidia-patch.git
cd nvidia-patch
sudo bash ./patch.sh   # patches driver libraries, so it needs root
Patching the NVIDIA driver modifies a system binary. This is not officially supported by NVIDIA. After driver updates, you must re-apply the patch. Some cloud providers (AWS, GCP) may not allow driver patching on managed GPU instances.
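Because the patch must be re-applied after every driver update, a small check run at boot or from cron can catch a silent update. This is a sketch: the `needs_repatch` helper and the state-file path are illustrative, not part of nvidia-patch.

```shell
# Print "repatch" when the installed driver version differs from the
# version recorded the last time the patch was applied (hypothetical helper).
needs_repatch() {
  local current="$1" state_file="$2"
  if [ ! -f "$state_file" ] || [ "$current" != "$(cat "$state_file")" ]; then
    echo "repatch"
  fi
}

# Usage (assumes nvidia-smi is on PATH; record the version after patching):
#   needs_repatch "$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)" \
#     /var/lib/nvenc-patch.version
```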

CUDA and Driver Requirements

Checking Your Versions

# Driver version (also shows the maximum CUDA version the driver supports)
nvidia-smi

# CUDA toolkit version (nvcc is only present if the toolkit is installed)
nvcc --version

# Docker GPU access
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
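For a one-line-per-GPU summary, nvidia-smi's CSV query output can be piped through a small awk filter. `summarize_gpus` is an illustrative name, not a go-livepeer or NVIDIA tool:

```shell
# Summarize GPUs from nvidia-smi CSV output (name, driver, total VRAM).
# Feed it:
#   nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv,noheader
summarize_gpus() {
  awk -F', ' '{ printf "GPU %d: %s (driver %s, %s)\n", NR-1, $1, $2, $3 }'
}

# Usage:
#   nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv,noheader | summarize_gpus
```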

VRAM Requirements by Workload

For detailed per-pipeline VRAM planning, see the Model and Demand Reference.
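As a quick pre-flight, you can compare a pipeline's stated VRAM requirement against each card's free memory. `fits_in_vram` is an illustrative helper, and the 8000 MiB figure in the usage line is a placeholder, not a Livepeer requirement:

```shell
# Reads free-VRAM values in MiB (one per line) and reports whether each
# GPU can host a workload needing $1 MiB.
# Feed it:
#   nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits
fits_in_vram() {
  awk -v need="$1" '{ print ($1 + 0 >= need + 0 ? "fits" : "insufficient") }'
}

# Usage:
#   nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits | fits_in_vram 8000
```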

GPU Selection Guidance

Any supported NVIDIA GPU works. For cost efficiency, an RTX 3060 12GB or RTX 4060 Ti 16GB provides good transcoding throughput at low power draw; patch the NVENC limit to handle more concurrent sessions. Budget pick: the GTX 1660 Super (6 GB) is the cheapest entry for transcoding-only workloads.

Last modified on March 16, 2026