Choose your test
Video Transcoding Test
20-30 min. Verify GPU transcoding works. Runs an orchestrator and a gateway on the same machine, sends a test stream via ffmpeg, and confirms HLS output.
AI Inference Test
35-65 min. Verify AI inference works. Runs an orchestrator and an AI runner with one warm model, sends a test prompt, and confirms an image is returned. Requires 24 GB VRAM for diffusion; 8 GB for the LLM alternative.
What you need
- NVIDIA GPU — any model for the video test; 24 GB VRAM for AI diffusion (or 8-16 GB for the LLM alternative)
- Docker Engine — with NVIDIA Container Toolkit for GPU passthrough
- ffmpeg — for the video test only (`ffmpeg -version` to check)
- Linux — required for the AI test (video test also works on WSL2 or macOS Docker)
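The list above can be sanity-checked from a terminal. This is a minimal sketch assuming the standard binary names (`docker`, `ffmpeg`); the GPU passthrough check is left commented out because it requires an NVIDIA GPU, the Container Toolkit, and network access to pull the image:

```shell
# Check that the quickstart prerequisites are installed.
status="ok"
for cmd in docker ffmpeg; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found: $cmd"
  else
    echo "MISSING: $cmd"   # ffmpeg is only needed for the video test
    status="incomplete"
  fi
done
echo "prerequisites: $status"

# On a GPU host, uncomment to confirm NVIDIA Container Toolkit passthrough
# (image tag is an example; any CUDA base image works):
# docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the passthrough check prints the `nvidia-smi` table from inside the container, Docker can see the GPU and both tests can proceed.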
After the quickstart
Setup Guide
Configure for production: going on-chain, staking, and making reward calls. The setup guide takes a verified GPU to an earning node.
Operator Rationale
Still evaluating? Review the cost-benefit analysis before committing to the full setup.
Join a Pool
The fastest path to earning without a full setup: contribute GPU capacity to an existing operator pool.
Workload Options
See which workloads earn and which fit your hardware before committing.