Overview
The Gateway's dual capability comes from its modular architecture: dedicated managers handle specific workflows while sharing common infrastructure for media ingestion, payment processing, and result delivery. The LivepeerNode struct contains fields for both traditional transcoding (Transcoder, TranscoderManager) and AI processing (AIWorker, AIWorkerManager). The gateway determines the processing type from the request:

- Standard transcoding requests go through the BroadcastSessionsManager
- AI requests go through the AISessionManager, with AI-specific authentication and pipeline selection
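As a hedged illustration of this dispatch, the sketch below routes a request path to one of the two managers. The routeRequest helper and the path prefixes are hypothetical, not the actual go-livepeer routing logic:

```go
package main

import (
	"fmt"
	"strings"
)

// routeRequest sketches how a dual-capability gateway might dispatch a
// request to the appropriate session manager. The "/ai/" prefix is an
// illustrative assumption, not go-livepeer's real routing table.
func routeRequest(path string) string {
	if strings.HasPrefix(path, "/ai/") {
		// AI pipeline requests (e.g. text-to-image) use the AISessionManager.
		return "AISessionManager"
	}
	// Everything else is treated as a standard transcode job.
	return "BroadcastSessionsManager"
}

func main() {
	fmt.Println(routeRequest("/ai/text-to-image"))
	fmt.Println(routeRequest("/live/movie/12.ts"))
}
```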
| Aspect | Video Transcoding | AI Pipelines |
|---|---|---|
| Processing Type | Format/bitrate conversion | AI model inference |
| Session Manager | BroadcastSessionsManager | AISessionManager |
| Payment Model | Per segment | Per pixel processed |
| Protocol | Standard HLS/DASH | Trickle protocol for real-time AI |
| Components | RTMP Server, Playlist Manager | MediaMTX, Trickle Server |
Configuration
To configure a gateway to handle both video transcoding and AI processing, set the appropriate flags and options when starting the livepeer binary.

Essential Flags

To enable the dual setup, configure the gateway with the following flags:

| Flag | Description | Required |
|---|---|---|
| -gateway | Run as a gateway node | ✓ |
| -httpIngest | Enable HTTP ingest for AI requests | ✓ |
| -transcodingOptions | Transcoding profiles for video | ✓ |
| -aiServiceRegistry | Enable AI service registry | ✓ |
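A minimal invocation combining the flags above might look like the following sketch. The bind address is a placeholder and flag defaults may vary by release:

```shell
livepeer \
  -gateway \
  -httpIngest \
  -transcodingOptions P240p30fps16x9,P360p30fps16x9 \
  -aiServiceRegistry \
  -httpAddr 0.0.0.0:8936
```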
AI-Specific Configuration
AI flags
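The block below is a hedged sketch of the AI-related options referenced in this guide; the values are placeholders, and the worker-side flags only apply if the node also runs an AI worker:

```shell
-aiServiceRegistry            # gateway: discover AI orchestrators via the AI service registry
-aiModelsDir /path/to/models  # worker-side: directory containing AI model files
-aiRunnerContainersPerGPU 2   # worker-side: AI runner containers per GPU
```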
Transcoding Configuration
Note: if the transcodingOptions.json file is not provided, the gateway falls back to the default transcoding profiles, -transcodingOptions=P240p30fps16x9,P360p30fps16x9.
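If you do supply a custom profile file, it is a JSON list of profiles. The field set below is a sketch (names, resolutions, and bitrates are illustrative); verify the schema against your go-livepeer release:

```json
[
  { "name": "720p30", "width": 1280, "height": 720, "bitrate": 2000000, "fps": 30 },
  { "name": "480p30", "width": 854,  "height": 480, "bitrate": 1000000, "fps": 30 }
]
```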
Transcoding flags
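A hedged sketch of the transcoding-related flags; values are placeholders:

```shell
-transcodingOptions P240p30fps16x9,P360p30fps16x9  # or a path to transcodingOptions.json
-rtmpAddr 0.0.0.0:1935                             # RTMP ingest bind address
-nvidia all                                        # GPU transcoding hosts only
```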
-nvidia and NVIDIA drivers are only required for GPU transcoding hosts. A gateway-only routing setup does not require NVIDIA drivers.

Deployment
- Off-Chain Development Setup
- On-Chain Production Setup
For local development and testing purposes, there is no need to connect to the blockchain payments layer.
You will need to run your own orchestrator node for local development.
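One way to sketch this off-chain pairing: run a local orchestrator with an attached transcoder, then point the gateway directly at it. Addresses are placeholders:

```shell
# Terminal 1: off-chain orchestrator with an attached transcoder
livepeer -orchestrator -transcoder -serviceAddr 127.0.0.1:8935

# Terminal 2: off-chain gateway pointed at the local orchestrator
livepeer -gateway -httpIngest -orchAddr 127.0.0.1:8935
```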
Off-Chain Gateway Deployment with Dual Capabilities
Combined Gateway/Orchestrator AI-Enabled Deployment
This setup is for nodes that handle both orchestration and AI processing.

Combined Gateway/Orchestrator

On-Chain Deployment
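As a hedged sketch of an on-chain invocation, the example below adds the blockchain flags to the dual-capability setup. The network name and RPC URL are illustrative placeholders; verify both against your go-livepeer release and your Ethereum provider:

```shell
livepeer \
  -gateway \
  -network arbitrum-one-mainnet \
  -ethUrl https://your-arbitrum-rpc.example \
  -httpIngest \
  -aiServiceRegistry \
  -transcodingOptions P240p30fps16x9,P360p30fps16x9
```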
Troubleshooting
Common Issues

- AI models not loading: Check -aiModelsDir and model file permissions
- GPU transcoding failures: Verify NVIDIA drivers and -nvidia configuration (only required for GPU transcoding hosts)
- Port conflicts: Ensure -rtmpAddr, -httpAddr, and -cliAddr are available
- Memory pressure: Monitor AI model memory usage and adjust -aiRunnerContainersPerGPU
Example Setup
The box setup for local development demonstrates running a gateway that handles both types of processing. The embedded box/box.md excerpt is unavailable in this docs branch. Review the full example setup in the upstream repository: livepeer/go-livepeer box/box.md.