Workload Types

Orchestrators execute four categories of compute workload. Which workloads a node accepts depends on its hardware, the pipelines and models it has loaded, and its configuration.
An Orchestrator running both video transcoding and AI inference is described as operating in a dual-workload configuration: a single Orchestrator process with both pipelines enabled. See the pipeline documentation for internals.

Supported AI Pipelines

Livepeer defines a standard set of AI pipelines that Orchestrators can advertise. Each pipeline maps to a category of inference task and a set of compatible models. An Orchestrator may support any subset of pipelines and models, and each pipeline-model combination is priced and advertised independently. Gateways discover these combinations via the AIServiceRegistry contract or from the Orchestrator's capability response during session negotiation.
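One way to picture independent per-combination pricing is as a table keyed by (pipeline, model) pairs. The sketch below is illustrative only; the type and field names are assumptions, not go-livepeer's actual types, and the prices are made up.

```go
package main

import "fmt"

// PipelineModel identifies one advertised (pipeline, model) pair.
// These names are illustrative, not go-livepeer's wire types.
type PipelineModel struct {
	Pipeline string // e.g. "text-to-image"
	ModelID  string // e.g. "example/sd-turbo" (hypothetical model ID)
}

// PriceTable maps each advertised pair to a price per unit of work.
// Every entry is priced and advertised independently.
type PriceTable map[PipelineModel]int64 // wei per output unit (assumed unit)

func main() {
	prices := PriceTable{
		{Pipeline: "text-to-image", ModelID: "example/sd-turbo"}:  250,
		{Pipeline: "upscale", ModelID: "example/sd-x4-upscaler"}: 400,
	}
	for pm, wei := range prices {
		fmt.Printf("%s / %s -> %d wei per unit\n", pm.Pipeline, pm.ModelID, wei)
	}
}
```

Because the key is the pair, adding a second model to an existing pipeline is just another entry with its own price; nothing forces uniform pricing across a pipeline.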

How Capabilities Are Advertised

When a Gateway wants to route a job, it must find an Orchestrator that can handle it. Orchestrators make themselves discoverable through two mechanisms:

On-chain registration

Orchestrators register their service URI in the ServiceRegistry contract on Arbitrum. AI-capable Orchestrators additionally register with the AIServiceRegistry contract (or use the -aiServiceRegistry flag to connect to the AI subnet). This makes the Orchestrator discoverable to all Gateways that query the registry.
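Conceptually, what a Gateway reads back from the registry is a mapping from an Orchestrator's on-chain address to its service URI. The sketch below is a simplified illustration of that view; the struct and field names are assumptions, not the contract's ABI or go-livepeer's bindings.

```go
package main

import "fmt"

// RegistryEntry is an illustrative view of one ServiceRegistry record:
// the Orchestrator's address on Arbitrum and the URI where it accepts
// sessions. Field names are assumptions for this sketch.
type RegistryEntry struct {
	OrchAddr   string
	ServiceURI string
}

func main() {
	// A Gateway querying the registry ends up with a list like this
	// (addresses and URIs here are placeholders).
	entries := []RegistryEntry{
		{OrchAddr: "0x0000000000000000000000000000000000000001", ServiceURI: "https://orch-a.example:8935"},
		{OrchAddr: "0x0000000000000000000000000000000000000002", ServiceURI: "https://orch-b.example:8935"},
	}
	for _, e := range entries {
		fmt.Println(e.OrchAddr, "->", e.ServiceURI)
	}
}
```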

Capability negotiation

When a Gateway establishes a session with an Orchestrator, the Orchestrator returns a capability manifest: the full list of pipelines it supports, the models it has loaded, and its price per unit for each. The Gateway uses this to decide whether to proceed with the session. Advertised capabilities must match the models that are actually loaded and available. When the declared capability set drifts from the live node state, Gateways send jobs the Orchestrator cannot complete.
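The consistency requirement above can be expressed as a simple subset check: every advertised (pipeline, model) pair must correspond to a model that is actually loaded. This is a minimal sketch with illustrative type and field names, not go-livepeer's manifest format.

```go
package main

import "fmt"

// Manifest is an illustrative capability manifest returned at session setup.
type Manifest struct {
	Pipelines map[string][]string // pipeline -> advertised model IDs
	PriceWei  map[string]int64    // "pipeline/model" -> price per unit
}

// drifted reports advertised pipeline/model pairs that are not actually
// loaded on the node, i.e. the capability drift the text warns about.
func drifted(m Manifest, loaded map[string]bool) []string {
	var missing []string
	for pipeline, models := range m.Pipelines {
		for _, model := range models {
			key := pipeline + "/" + model
			if !loaded[key] {
				missing = append(missing, key)
			}
		}
	}
	return missing
}

func main() {
	m := Manifest{
		Pipelines: map[string][]string{"text-to-image": {"model-a", "model-b"}},
		PriceWei:  map[string]int64{"text-to-image/model-a": 250},
	}
	// Only model-a is actually loaded; model-b is stale in the manifest.
	loaded := map[string]bool{"text-to-image/model-a": true}
	fmt.Println(drifted(m, loaded))
}
```

An empty result from `drifted` means the manifest matches live node state; any non-empty result names jobs that Gateways could route but the node would fail.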

How Gateways Select Orchestrators

Gateway selection determines whether an Orchestrator attracts work. For every session, Gateways rank candidate Orchestrators with a multi-factor selection algorithm. A Gateway that sends a job and receives an error or timeout will deprioritise your Orchestrator in subsequent sessions. Sustained availability and accurate capability declaration are the strongest signals for consistent job flow. See the pricing documentation for how to configure competitive prices.
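The exact factors and weights vary by Gateway implementation; the sketch below only illustrates the shape of multi-factor ranking, rewarding reliability and penalising latency and price. The weights, field names, and score formula are assumptions, not any Gateway's actual algorithm.

```go
package main

import (
	"fmt"
	"sort"
)

// OrchStats is an illustrative record a Gateway might keep per Orchestrator.
type OrchStats struct {
	URI         string
	SuccessRate float64 // fraction of recent jobs completed without error/timeout
	LatencyMs   float64 // recent round-trip latency
	PriceWei    float64 // price per unit for the requested pipeline/model
}

// score is a toy multi-factor ranking: higher success rate helps,
// higher latency and price hurt. Weights here are arbitrary.
func score(o OrchStats) float64 {
	return o.SuccessRate*100 - o.LatencyMs*0.1 - o.PriceWei*0.01
}

func main() {
	orchs := []OrchStats{
		{URI: "https://a.example:8935", SuccessRate: 0.99, LatencyMs: 80, PriceWei: 300},
		{URI: "https://b.example:8935", SuccessRate: 0.80, LatencyMs: 40, PriceWei: 200},
	}
	// Rank best-first; the cheaper but less reliable node loses here,
	// matching the text: failed jobs cost future work.
	sort.Slice(orchs, func(i, j int) bool { return score(orchs[i]) > score(orchs[j]) })
	for _, o := range orchs {
		fmt.Printf("%s score=%.1f\n", o.URI, score(o))
	}
}
```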

Capability Boundaries

Orchestrators execute compute and receive payment tickets. Gateways handle routing, application integration, and business-layer workflows. Use a Gateway when you need to aggregate application demand and route work across multiple Orchestrators. See the Gateway documentation for that path.
Last modified on March 16, 2026