Workload Types
Orchestrators execute four categories of compute workload. Which workloads a node accepts depends on its hardware, the pipelines and models it has loaded, and its node configuration.

An Orchestrator running both video transcoding and AI inference is described as operating in a dual-workload configuration. It is the same Orchestrator process with both pipelines enabled. See for the pipeline internals.
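The dual-workload idea can be sketched as follows. This is an illustrative model only; the workload labels and struct fields are assumptions for the example, not Livepeer's internal identifiers:

```go
package main

import "fmt"

// Workload labels a category of compute an Orchestrator can run.
// The names here are illustrative, not Livepeer's internal identifiers.
type Workload string

const (
	Transcoding Workload = "transcoding"
	AIInference Workload = "ai-inference"
)

// Node models a single Orchestrator process with a set of enabled pipelines.
type Node struct {
	Enabled map[Workload]bool
}

// DualWorkload reports whether this one process has both the video
// transcoding and AI inference pipelines enabled.
func (n Node) DualWorkload() bool {
	return n.Enabled[Transcoding] && n.Enabled[AIInference]
}

func main() {
	n := Node{Enabled: map[Workload]bool{Transcoding: true, AIInference: true}}
	fmt.Println(n.DualWorkload()) // true: same process, both pipelines
}
```

The point of the model is that dual-workload is not a separate node type, just a configuration state of one process.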
Supported AI Pipelines
Livepeer defines a standard set of AI pipelines that Orchestrators can advertise. Each pipeline maps to a category of inference task and a compatible set of models. An Orchestrator may support any subset of pipelines and models, and each pipeline-model combination is independently priced and advertised. Gateways discover these capabilities via the AIServiceRegistry contract or from the Orchestrator's capability response during session negotiation.

How Capabilities Are Advertised
When a Gateway wants to route a job, it must find an Orchestrator that can handle it. Orchestrators make themselves discoverable through two mechanisms:

On-chain registration
Orchestrators register their service URI in the ServiceRegistry contract on Arbitrum. AI-capable Orchestrators additionally register with the AIServiceRegistry contract (or use the -aiServiceRegistry
flag to connect to the AI subnet). This makes the Orchestrator discoverable to all Gateways that query
the registry.
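To make the discovery step concrete, here is a minimal sketch of a Gateway filtering registrations for a requested pipeline-model combination, with each combination independently priced. The field names, pipeline labels, and price units are assumptions for illustration, and the on-chain registry query is replaced by an in-memory list:

```go
package main

import "fmt"

// Registration is a simplified view of what a Gateway learns from the
// ServiceRegistry / AIServiceRegistry contracts: a service URI plus the
// pipeline-model combinations the node advertises, each with its own price.
// The schema and units here are illustrative, not Livepeer's actual layout.
type Registration struct {
	ServiceURI string
	// Prices maps "pipeline/model" to a price in hypothetical units.
	// A transcode-only Orchestrator advertises no AI entries.
	Prices map[string]int64
}

// cheapest returns the service URI of the lowest-priced registration
// advertising the requested pipeline/model pair, approximating the
// filtering a Gateway does before per-session capability negotiation.
func cheapest(regs []Registration, pipelineModel string) (string, bool) {
	var (
		bestURI   string
		bestPrice int64
		found     bool
	)
	for _, r := range regs {
		price, ok := r.Prices[pipelineModel]
		if ok && (!found || price < bestPrice) {
			bestURI, bestPrice, found = r.ServiceURI, price, true
		}
	}
	return bestURI, found
}

func main() {
	regs := []Registration{
		{ServiceURI: "https://orch-a.example:8935", Prices: map[string]int64{"text-to-image/model-x": 2500}},
		{ServiceURI: "https://orch-b.example:8935", Prices: map[string]int64{"text-to-image/model-x": 1200}},
		{ServiceURI: "https://orch-c.example:8935"}, // transcode-only, no AI entries
	}
	uri, ok := cheapest(regs, "text-to-image/model-x")
	fmt.Println(ok, uri) // true https://orch-b.example:8935
}
```

In practice the candidate set comes from the registry contracts and is refined by the Orchestrator's capability response at session time; this sketch only shows why per-combination pricing matters for routing.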