The Livepeer AI network is currently in its Beta stage and is undergoing active development. Running it on the same machine as your main Orchestrator or Gateway node may cause stability issues. Please proceed with caution.

The Livepeer AI network is currently in Beta but is already integrated into the main go-livepeer software. You can run the Livepeer AI software using one of the following methods:

  • Docker (Recommended): The simplest and preferred method.
  • Pre-built Binaries: An alternative if you prefer not to use Docker.

Orchestrator Node Architecture

In the Livepeer AI network, orchestrator operations rely on two primary node types:

  • Orchestrator: Manages and routes incoming jobs to available compute resources.
  • Worker: Performs the actual computation tasks.

The simplest configuration combines both roles on a single machine, using the machine's GPUs for AI inference tasks, so the orchestrator also functions as a worker (known as a combined AI orchestrator). In this setup, capacity is limited by the available GPUs: the capacity for each pipeline/model_id equals the number of runner containers serving that pipeline/model_id. For greater scalability, operators can deploy dedicated (remote) worker nodes that connect to the orchestrator, increasing overall compute capacity. Instructions for setting up remote workers are available on the next page.
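For illustration, a combined AI orchestrator typically declares which models it serves in an aiModels.json file, and runs one runner container per warm entry. The sketch below is a minimal example; the field names (pipeline, model_id, price_per_unit, warm) and values follow the Beta documentation but should be treated as assumptions to adapt for your deployment:

[
    {
        "pipeline": "text-to-image",
        "model_id": "ByteDance/SDXL-Lightning",
        "price_per_unit": 4768371,
        "warm": true
    }
]

With this configuration on a single-GPU machine, one runner container serves the text-to-image pipeline for ByteDance/SDXL-Lightning, so the capacity for that pipeline/model_id is 1.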

Start a Combined AI Orchestrator

Please follow the steps below to start your combined AI orchestrator node.
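For orientation, a typical Docker invocation for a combined AI orchestrator looks like the sketch below. The image tag and flag set (-orchestrator, -aiWorker, -aiModels, -aiModelsDir, and so on) are assumptions drawn from the Beta documentation, not a definitive command; adjust paths, ports, and the service address for your environment.

# Sketch of a combined AI orchestrator launch; the image tag and flags are
# assumptions based on the Beta docs and should be adapted to your setup.
docker run \
    --name livepeer_ai_orchestrator \
    -v ~/.lpData/:/root/.lpData/ \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --network host \
    --gpus all \
    livepeer/go-livepeer:ai-video \
    -orchestrator \
    -transcoder \
    -serviceAddr 0.0.0.0:8936 \
    -nvidia "all" \
    -aiWorker \
    -aiModels /root/.lpData/aiModels.json \
    -aiModelsDir ~/.lpData/models \
    -v 6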

Verify Combined AI Orchestrator Operation

Once your combined Livepeer AI Orchestrator node is running, verify that the worker is operational by sending an AI inference request directly to the ai-runner container. You can either use the Swagger UI interface or a curl command for this check.

1. Access the Swagger UI

Open your web browser and navigate to http://localhost:8000/docs to access the Swagger UI interface.

2. Initiate an Inference Request

In the Swagger UI, locate the POST /text-to-image endpoint and click the Try it out button. Use the following example JSON payload:

{
    "prompt": "A cool cat on the beach."
}

This request will instruct the AI model to generate an image based on the text in the prompt field.
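If you prefer the command line over the Swagger UI, the same request can be sent with curl, assuming the ai-runner container is listening on localhost:8000 as above:

# Send the text-to-image request directly to the ai-runner container.
curl -X POST http://localhost:8000/text-to-image \
    -H "Content-Type: application/json" \
    -d '{"prompt": "A cool cat on the beach."}'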

3. Inspect the Inference Response

If the AI Orchestrator node is functioning correctly, you should receive a response similar to the following:

{
    "images": [
        {
            "url": "data:image/png;base64,iVBORw0KGgoAA...",
            "seed": 2724904334
        }
    ]
}

The url field contains the base64-encoded image generated by the AI model. To convert this image to PNG, use a base64 decoder such as Base64.guru.
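Alternatively, you can decode the image from the command line. The sketch below assumes the response was saved to a file named response.json and that jq and a base64 utility with a -d (decode) flag are available:

# Extract the data URL, strip the "data:image/png;base64," prefix, and
# decode the remaining payload into a PNG file.
jq -r '.images[0].url' response.json | cut -d ',' -f 2 | base64 -d > output.png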