Start your AI Orchestrator
The Livepeer AI network is currently in its Beta stage and is undergoing active development. Running it on the same machine as your main Orchestrator or Gateway node may cause stability issues. Please proceed with caution.
The Livepeer AI network is currently in Beta but is already integrated into the main go-livepeer software. You can run the Livepeer AI software using one of the following methods:
- Docker (Recommended): The simplest and preferred method.
- Pre-built Binaries: An alternative if you prefer not to use Docker.
Orchestrator Node Architecture
In the Livepeer AI network, orchestrator operations rely on two primary node types:
- Orchestrator: Manages and routes incoming jobs to available compute resources.
- Worker: Performs the actual computation tasks.
The simplest configuration combines both roles on a single machine, utilizing
the machine’s GPUs for AI inference tasks, where the orchestrator also functions
as a worker (known as a combined AI orchestrator). In this setup, capacity
is limited by the available GPUs and is set per pipeline/model ID as:

capacity per pipeline/model_id = runner container count per pipeline/model_id
For expanded scalability, operators can deploy dedicated (remote) worker nodes
that connect to the orchestrator, increasing overall compute capacity.
Instructions for setting up remote workers are available on the
next page.
Start a Combined AI Orchestrator
Please follow the steps below to start your combined AI orchestrator node.
Retrieve the Livepeer AI Docker Image
Fetch the latest Livepeer AI Docker image from the Livepeer Docker Hub with the following command:
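(A sketch; during the AI Beta the image was published under the `ai-video` tag. Verify the current tag on the Livepeer Docker Hub page.)

```bash
# Pull the Livepeer AI image (tag assumed; check Docker Hub for the current one).
docker pull livepeer/go-livepeer:ai-video
```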
Fetch the Latest AI Runner Docker Image
The Livepeer AI network employs a containerized workflow for running AI models. Fetch the latest AI Runner image with this command:
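(Assuming the runner is published as `livepeer/ai-runner` on Docker Hub, as referenced by the `-aiRunnerImage` flag described below.)

```bash
# Pull the latest AI Runner image, which serves the AI models.
docker pull livepeer/ai-runner:latest
```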
Pull Pipeline-Specific Images (optional)
Next, pull any pipeline-specific images if needed. Check the pipelines documentation for more information. For example, to pull the image for the segment-anything-2 pipeline:
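(A sketch; pipeline-specific images are assumed to be published as tags of `livepeer/ai-runner`.)

```bash
# Pull the pipeline-specific runner image for segment-anything-2 (tag assumed).
docker pull livepeer/ai-runner:segment-anything-2
```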
Verify the AI Models are Available
The Livepeer AI network leverages pre-trained AI models for inference tasks. Before launching the AI Orchestrator node, verify that the weights of these models are accessible on your machine. For more information, visit the Download AI Models page.
Configure your AI Orchestrator
Confirm that the AI models are correctly set up in the `aiModels.json` file in the `~/.lpData/` directory. For guidance on configuring the `aiModels.json` file, refer to the AI Models Configuration page. The configuration file should resemble:
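(A minimal illustrative sketch; the pipeline, model ID, and price shown here are examples only. See the AI Models Configuration page for the full schema.)

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "ByteDance/SDXL-Lightning",
    "price_per_unit": 4768371,
    "warm": true
  }
]
```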
Launch an (off-chain) AI Orchestrator
Execute the Livepeer AI Docker image using the following command:
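(A sketch assuming the `ai-video` image tag, GPUs exposed with `--gpus all`, and models stored under `~/.lpData/models`; adjust paths, tags, and flags for your setup.)

```bash
# Mounts: ~/.lpData for configuration and model metadata, and the Docker socket
# so the combined orchestrator can start and manage AI Runner containers.
docker run \
    --name livepeer_ai_orchestrator \
    -v ~/.lpData/:/root/.lpData/ \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --network host \
    --gpus all \
    livepeer/go-livepeer:ai-video \
    -orchestrator \
    -transcoder \
    -serviceAddr 0.0.0.0:8936 \
    -v 6 \
    -nvidia "all" \
    -aiWorker \
    -aiModels /root/.lpData/aiModels.json \
    -aiModelsDir ~/.lpData/models
```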
This command launches an off-chain AI Orchestrator node. While most of the commands are similar to those used when operating a Mainnet Transcoding Network Orchestrator node (explained in the go-livepeer CLI reference), there are a few Livepeer AI specific flags:
- `-aiWorker`: This flag enables the AI Worker functionality.
- `-aiModels`: This flag sets the path to the JSON file that contains the AI models.
- `-aiModelsDir`: This flag indicates the directory where the AI models are stored on the host machine.
- `-aiRunnerImage`: This optional flag specifies which version of the ai-runner image is used. Example: `livepeer/ai-runner:0.0.2`.
Moreover, the `--network host` flag facilitates communication between the AI Orchestrator and the AI Runner container.

Lastly, the `-nvidia` flag can be configured in a few ways. Use a comma-separated list of GPU IDs (e.g., `0,1`) to activate specific GPU slots; each GPU will need its own config item in `aiModels.json`. Alternatively, use `"all"` to activate all GPUs on the machine with a single model loaded in `aiModels.json`. (Warning: if GPUs with different RAM sizes are installed, containers may fail on GPUs that have less than the required RAM.)

The `-aiModelsDir` path should be defined as a path on the host machine.

Confirm Successful Startup of the AI Orchestrator
If your Livepeer AI Orchestrator node is functioning correctly, its startup logs should show the orchestrator and AI worker starting without errors.
Check Port Availability
To make your Livepeer AI Orchestrator node accessible from the internet, you need to configure your network settings. Ensure that port `8936` is unblocked on your machine. Additionally, consider setting up port forwarding on your router so that Gateway nodes can reach your Orchestrator from the internet.
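For example, on a Linux host running the `ufw` firewall (an assumption about your environment), the port could be opened like this:

```bash
# Allow inbound TCP traffic on the orchestrator service port.
sudo ufw allow 8936/tcp

# From another machine, check reachability (replace with your public IP).
nc -zv <ORCHESTRATOR_IP> 8936
```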
Download the Latest Livepeer AI Binary
Download the latest Livepeer AI binary for your system:
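(A sketch assuming the archive naming used by the Livepeer build service; verify the exact URL in the installation guide referenced below.)

```bash
# Download the latest pre-built archive (URL pattern assumed; verify first).
wget https://build.livepeer.live/go-livepeer/latest_main/livepeer-<OS>-<ARCH>.tar.gz
```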
Replace `<OS>` and `<ARCH>` with your system’s operating system and architecture. For example, for a Linux system with an AMD64 architecture, the command would be:
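(Same URL assumption as above.)

```bash
wget https://build.livepeer.live/go-livepeer/latest_main/livepeer-linux-amd64.tar.gz
```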
See the go-livepeer installation guide for more information on the available binaries.
Extract and Configure the Binary
Once downloaded, extract the binary to a directory of your choice.
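For example (archive name from the Linux/AMD64 download above; the extracted directory name is an assumption):

```bash
# Extract the binaries and enter the extracted directory.
tar -xzf livepeer-linux-amd64.tar.gz
cd livepeer-linux-amd64
```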
Fetch the Latest AI Runner Docker Image
The Livepeer AI network employs a containerized workflow for running AI models. Fetch the latest AI Runner image with this command:
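(Same image assumption as in the Docker instructions.)

```bash
docker pull livepeer/ai-runner:latest
```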
Pull Pipeline-Specific Images (optional)
Next, pull any pipeline-specific images if needed. Check the pipelines documentation for more information. For example, to pull the image for the segment-anything-2 pipeline:
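(Tag assumed, as in the Docker instructions.)

```bash
docker pull livepeer/ai-runner:segment-anything-2
```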
Verify the AI Models are Available
The Livepeer AI network leverages pre-trained AI models for inference tasks. Before launching the AI Orchestrator node, verify that the weights of these models are accessible on your machine. For more information, visit the Download AI Models page.
Configure your AI Orchestrator
Confirm that the AI models are correctly set up in the `aiModels.json` file in the `~/.lpData/` directory. For guidance on configuring the `aiModels.json` file, refer to the AI Models Configuration page. The configuration file should resemble:
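(The same illustrative sketch as in the Docker instructions; values are examples only.)

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "ByteDance/SDXL-Lightning",
    "price_per_unit": 4768371,
    "warm": true
  }
]
```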
Launch an (off-chain) AI Orchestrator
Run the following command to start your Livepeer AI Orchestrator node:
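(A sketch mirroring the Docker command, assuming the binary is in the current directory and models live under `~/.lpData/models`.)

```bash
./livepeer \
    -orchestrator \
    -transcoder \
    -serviceAddr 0.0.0.0:8936 \
    -v 6 \
    -nvidia "all" \
    -aiWorker \
    -aiModels ~/.lpData/aiModels.json \
    -aiModelsDir ~/.lpData/models
```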
This command launches an off-chain AI Orchestrator node. While most of the commands are similar to those used when operating a Mainnet Transcoding Network Orchestrator node (explained in the go-livepeer CLI reference), there are a few Livepeer AI specific flags:
- `-aiWorker`: This flag enables the AI Worker functionality.
- `-aiModels`: This flag sets the path to the JSON file that contains the AI models.
- `-aiModelsDir`: This flag indicates the directory where the AI models are stored.
- `-aiRunnerImage`: This optional flag specifies which version of the ai-runner image is used. Example: `livepeer/ai-runner:0.0.2`.
Confirm Successful Startup of the AI Orchestrator
If your Livepeer AI Orchestrator node is functioning correctly, its startup logs should show the orchestrator and AI worker starting without errors.
Check Port Availability
To make your Livepeer AI Orchestrator node accessible from the internet, you need to configure your network settings. Ensure that port `8936` is unblocked on your machine. Additionally, consider setting up port forwarding on your router so that Gateway nodes can reach your Orchestrator from the internet.
If no binaries are available for your system, you can build the master branch of go-livepeer from source by following the instructions in the Livepeer repository or by reaching out to the Livepeer community on Discord.
Verify Combined AI Orchestrator Operation
Once your combined Livepeer AI Orchestrator node is running, verify that the worker is operational by sending an AI inference request directly to the ai-runner container. You can use either the Swagger UI or a `curl` command for this check.
Access the Swagger UI
Open your web browser and navigate to `http://localhost:8000/docs` to access the Swagger UI.
Initiate an Inference Request
In the Swagger UI, locate the `POST /text-to-image` endpoint and click the `Try it out` button. Use the following example JSON payload:
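(A minimal example payload; the prompt text is arbitrary, and additional parameters may be available in the Swagger schema.)

```json
{
  "prompt": "A cool cat on the beach."
}
```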
This request will instruct the AI model to generate an image based on the text in the `prompt` field.
Inspect the Inference Response
If the AI Orchestrator node is functioning correctly, you should receive a response similar to the following:
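(An illustrative sketch of the response shape; exact field names and values depend on the runner version, and the base64 data is truncated here.)

```json
{
  "images": [
    {
      "url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...",
      "seed": 280278971,
      "nsfw": false
    }
  ]
}
```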
The `url` field contains the base64-encoded image generated by the AI model. To convert this image to PNG, use a base64 decoder such as Base64.guru.
Send an Inference Request with curl
Alternatively, you can use the `curl` command to test the AI inference capabilities directly. Run the following command, replacing `<WORKER_NODE_IP>` with the IP address of your worker node:
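(Assuming the runner listens on port 8000, as in the Swagger UI step above.)

```bash
curl -X POST "http://<WORKER_NODE_IP>:8000/text-to-image" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A cool cat on the beach."}'
```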
This sends a POST request to the `text-to-image` endpoint on the worker node with the specified JSON payload.
Inspect the Response
If the AI Worker node is functioning correctly, you should receive a response similar to the Swagger UI example shown above.
As with the Swagger UI response, the `url` field contains a base64-encoded image that can be decoded into PNG format using a tool like Base64.guru.