Get Started
If you’re interested in joining Livepeer AI as an Orchestrator to perform AI inference on the Livepeer AI Network and earn fees, this guide is for you. It walks you through setting up an AI Orchestrator node, building on the Orchestrator Setup Guide for the Mainnet Transcoding Network with additional steps specific to AI operations.
For a step-by-step walkthrough, follow the subpages of this guide. For a broader understanding of Orchestrator operations, consult the Orchestrator Setup Guide for the Mainnet Transcoding Network, since this AI guide builds on the foundational knowledge covered there.
Orchestrator Setup Guide
Visit the Orchestrator Setup Guide for detailed instructions on setting up an Orchestrator node on the Mainnet Transcoding Network.
Prerequisites
Before setting up your AI Orchestrator node, ensure you meet the following requirements:
- You are operating a Top 100 Mainnet Orchestrator on the Mainnet Transcoding Network
- High-VRAM GPUs required: Livepeer AI requires GPUs with at least 16 GB of VRAM for most tasks. For optimal performance and a higher chance of being selected for jobs, NVIDIA 30/40-series GPUs or comparable models are recommended. See the AI Pipelines documentation for exact requirements.
- Docker is installed on your machine
- CUDA 12.4 is installed on your machine
- Nvidia Container Toolkit is installed on your machine
- You are using a Linux system (Support for Windows and macOS is coming soon)
- You have Python 3.10 or higher installed (for downloading and managing AI models)
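Before proceeding, you can verify the software prerequisites above from a terminal. The sketch below is a minimal, hypothetical check script (not part of the official setup); it assumes standard command names (`docker`, `nvidia-smi`, `python3`) and only reports what is present, so it is safe to run on a partially configured machine.

```shell
#!/usr/bin/env bash
# Hypothetical prerequisite check for an AI Orchestrator node (illustrative only).

check() {
  # Report whether a command is available on PATH without failing the script.
  if command -v "$1" >/dev/null 2>&1; then
    echo "OK: $1 found"
  else
    echo "MISSING: $1"
  fi
}

check docker       # Docker Engine
check nvidia-smi   # NVIDIA driver; its output should report "CUDA Version: 12.4"
check python3      # Python, needed for downloading and managing AI models

# Confirm the Python version is 3.10 or higher.
if command -v python3 >/dev/null 2>&1; then
  python3 -c 'import sys; ok = sys.version_info >= (3, 10); print(("OK" if ok else "WARN") + ": Python %d.%d" % sys.version_info[:2])'
fi
```

Note that the Nvidia Container Toolkit has no single canonical binary to probe, so confirm it separately (for example, by running a CUDA-enabled container) per NVIDIA's installation instructions.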