Orchestrators run video transcoding (NVENC/FFmpeg) and/or AI inference on GPUs. Hardware directly affects job selection, reputation, and revenue. Below are minimum, recommended, and AI-optimised guidelines for 2026.

Minimum (development / testing)

Minimum specs
Component | Minimum
Suitable for testnet, low-volume workloads, and learning.
Recommended production (video-focused)
Component | Recommended
Optimised for real-time streaming and multi-resolution transcoding.
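Before committing real traffic, it is worth confirming that FFmpeg can actually reach the NVENC encoder. A minimal smoke test sketch, assuming an FFmpeg build with NVENC support (the testsrc parameters are arbitrary examples):

```shell
# Encode one second of synthetic video through h264_nvenc and discard the
# result. Succeeds only if both the encoder and the GPU are usable.
if ! command -v ffmpeg >/dev/null 2>&1; then
  nvenc_status="no-ffmpeg"
elif ffmpeg -v error -f lavfi -i testsrc=duration=1:size=1280x720:rate=30 \
       -c:v h264_nvenc -f null - 2>/dev/null; then
  nvenc_status="ok"
else
  nvenc_status="unavailable"
fi
echo "NVENC: $nvenc_status"
```

The script reports rather than aborts, so it can run on machines without a GPU to show what is missing.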

AI inference

AI workloads are VRAM-bound. Stake does not determine AI job routing; capability and price do.
AI-oriented GPUs
GPU | Use case
Also ensure CUDA 12+, the NVIDIA Container Toolkit, adequate cooling, and high-IOPS storage for model weights.
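The Container Toolkit requirement can be verified directly. A sketch, assuming Docker is installed and using an example CUDA base-image tag (any CUDA 12+ image works):

```shell
# Check whether the NVIDIA Container Toolkit exposes the GPU inside Docker.
gpu_in_container="no"
if command -v docker >/dev/null 2>&1 \
   && docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi >/dev/null 2>&1; then
  gpu_in_container="yes"
fi
echo "GPU visible in container: $gpu_in_container"

# Report per-GPU VRAM headroom before downloading model weights.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
fi
```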

Network and ops

  • Latency: <50 ms to major regions helps streaming and gateway selection.
  • Production: static IP, reverse proxy (e.g. nginx), TLS, firewall rules.
  • Monitoring: Prometheus, Grafana, NVIDIA DCGM exporter; track GPU utilisation, VRAM, segment/job success rate.
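A minimal monitoring bootstrap sketch: start the DCGM exporter (9400 is its default metrics port) and write a Prometheus scrape config pointing at it. The exporter image tag is an example; check NVIDIA's registry for current releases.

```shell
# Start the DCGM exporter if Docker is available (serves GPU metrics on :9400).
if command -v docker >/dev/null 2>&1; then
  docker run -d --rm --gpus all -p 9400:9400 \
    nvcr.io/nvidia/k8s/dcgm-exporter:3.3.5-3.4.1-ubuntu22.04 2>/dev/null || true
fi

# Minimal Prometheus scrape config covering the exporter.
cat > prometheus-dcgm.yml <<'EOF'
scrape_configs:
  - job_name: dcgm            # GPU utilisation, VRAM, temperature, power
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9400']
EOF
echo "wrote prometheus-dcgm.yml"
```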

Checklist before going live

  • GPU visible via nvidia-smi
  • Docker sees GPU (--gpus all)
  • CUDA functional
  • Ports open (e.g. 8935)
  • Stable Arbitrum RPC
  • Monitoring configured
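The checklist above can be scripted so it runs the same way on every host. A sketch, assuming the default orchestrator port 8935 and a public Arbitrum RPC endpoint (substitute your own provider):

```shell
#!/usr/bin/env bash
# Pre-launch checklist: report PASS/FAIL for each item without aborting.
check() {
  if "$@" >/dev/null 2>&1; then echo "PASS: $*"; else echo "FAIL: $*"; fi
}

check nvidia-smi                                   # GPU visible to the driver
check docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
check nc -z -w 3 127.0.0.1 8935                    # orchestrator port open
check curl -sf -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  https://arb1.arbitrum.io/rpc                     # Arbitrum RPC reachable
```

Because `check` never aborts, a single run shows every failing item at once instead of stopping at the first one.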

Last modified on February 18, 2026