Attach Remote AI Workers
Introduction
The AI Worker is a crucial component of the Livepeer AI network, responsible for performing AI inference tasks. It can be run as a separate process on compute machines distinct from the Orchestrator or combined with the Orchestrator on the same machine.
Key Setup Considerations
- Startup Configuration: If you decide to use separate AI Workers, this mode must be selected when the Orchestrator starts. A combined Orchestrator cannot simultaneously support remote AI Workers.
- Shared Configuration File: Both the Orchestrator and AI Workers use the `aiModels.json` file (see Configuring AI Models).
  - The Orchestrator uses `aiModels.json` to set model pricing.
  - The AI Worker uses it to manage the runner containers for each model.
Remote AI Worker Setup
When using experimental external runner containers, ensure they connect to the AI Worker and not directly to the Orchestrator.
In a split configuration, the Orchestrator manages multiple AI Workers and allocates tasks based on the connected workers’ capacity. The Orchestrator’s total capacity is the sum of the capacities of all connected AI Workers. This setup enables flexible scaling of compute resources by adding or removing AI Workers as needed.
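As a concrete illustration of the capacity sum (the worker counts and capacity values below are hypothetical, not defaults):

```shell
# Three hypothetical connected AI Workers advertising capacities 4, 2, and 2.
# The Orchestrator's total capacity is their sum: 8 concurrent AI tasks.
worker_capacities="4 2 2"
total=0
for c in $worker_capacities; do
  total=$((total + c))
done
echo "$total"   # prints 8
```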
Launch Commands for Remote AI Worker
Below are the launch commands for both the Orchestrator and AI Worker nodes.
For the full Orchestrator launch command, see Start Your AI Orchestrator.
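As an illustrative sketch of a split setup (exact flags, ports, and image tags may differ between releases, so treat `<ORCH_SECRET>` and `<ORCH_IP>` as placeholders and check Start Your AI Orchestrator for the authoritative commands):

```shell
# On the Orchestrator machine: run without local inference and
# wait for remote AI Workers to connect with the shared secret.
docker run --name orchestrator -v ~/.lpData:/root/.lpData \
  livepeer/go-livepeer:ai-video \
  -orchestrator \
  -orchSecret <ORCH_SECRET> \
  -serviceAddr <ORCH_IP>:8935 \
  -aiModels /root/.lpData/aiModels.json

# On each AI Worker machine: connect to the Orchestrator and
# manage the runner containers locally (GPU access required).
docker run --name ai-worker --gpus all -v ~/.lpData:/root/.lpData \
  livepeer/go-livepeer:ai-video \
  -aiWorker \
  -orchAddr <ORCH_IP>:8935 \
  -orchSecret <ORCH_SECRET> \
  -aiModels /root/.lpData/aiModels.json \
  -aiModelsDir /root/.lpData/models \
  -nvidia all
```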
Configuration Files (`aiModels.json`)
The `aiModels.json` file configures AI model parameters separately for the Orchestrator and the AI Worker, with each configuration tailored to the specific needs of that node.
For detailed guidance on configuring `aiModels.json` with advanced model settings, see Configuring AI Models.
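As a minimal illustration (the model and price values below are placeholders, not recommendations), each `aiModels.json` entry pairs a pipeline with a model ID, per-unit pricing, and a warm-container setting:

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "ByteDance/SDXL-Lightning",
    "price_per_unit": 4768371,
    "warm": true
  }
]
```

On the Orchestrator, the `price_per_unit` field drives model pricing; on the AI Worker, the same file tells it which runner containers to manage.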
Verifying Remote AI Worker Operation
After starting your remote AI Worker node, you can verify it is operational by following the same inference test instructions used for the Orchestrator, as described in the Orchestrator Confirmation Section.
When accessing the AI Runner from a separate machine, replace `localhost` with the Worker node’s IP address in the inference test instructions.
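For example, assuming a text-to-image model is loaded and the runner is listening on its default port (8000 is assumed here; verify the port and endpoint path against your own setup), a direct inference test from another machine might look like:

```shell
# Replace <WORKER_IP> with the Worker node's IP address.
curl -X POST "http://<WORKER_IP>:8000/text-to-image" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a sunset over the mountains"}'
```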