The AI Subnet is currently in its Alpha stage and is undergoing active development. Running it on the same machine as your main Orchestrator or Gateway node may cause stability issues. Please proceed with caution.

The AI Subnet is not yet integrated into the main go-livepeer software due to its Alpha status. To equip your Orchestrator node with AI inference capabilities, please use the ai-video branch of go-livepeer. This branch contains the necessary software for the AI Orchestrator. Currently, there are two methods to run the AI Subnet software:

  • Docker: This is the most straightforward and recommended method to run the AI Orchestrator node.
  • Pre-built Binaries: Pre-built binaries are available if you prefer not to use Docker.

Start the AI Orchestrator

Please follow the steps below to start your AI Subnet Orchestrator node:

1. Retrieve the AI Subnet Docker Image

Fetch the latest AI Subnet Docker image from the Livepeer Docker Hub with the following command:

docker pull livepeer/go-livepeer:ai-video
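
To confirm the image was pulled successfully, you can list it with the Docker CLI; the ai-video tag should appear in the output:

docker images livepeer/go-livepeer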

2. Fetch the Latest AI Runner Docker Image

The Livepeer AI Subnet employs a containerized workflow for running AI models. Fetch the latest AI Runner image with this command:

docker pull livepeer/ai-runner:latest
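
The latest tag tracks the most recent runner release. If you prefer to pin a specific version instead, you can pull a tagged image and pass it to the Orchestrator later via the optional -aiRunnerImage flag (described below), for example:

docker pull livepeer/ai-runner:0.0.2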

3. Verify the AI Models are Available

The AI Subnet leverages pre-trained AI models for inference tasks. Before launching the AI Orchestrator node, verify that the weights of these models are accessible on your machine. For more information, visit the Download AI Models page.
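
As a quick sanity check, you can list the contents of the models directory; this assumes the default location used by the docker run command later in this guide:

ls ~/.lpData/models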

4. Configure your AI Orchestrator

Confirm that the AI models are correctly set up in the aiModels.json file in the ~/.lpData/ directory. For guidance on configuring the aiModels.json file, refer to the AI Models Configuration page. The configuration file should resemble:

[
    {
        "pipeline": "text-to-image",
        "model_id": "ByteDance/SDXL-Lightning",
        "price_per_unit": 4768371,
        "warm": true,
    }
]
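
Note that JSON does not allow trailing commas, so a stray comma after the last field will prevent the file from being parsed. You can validate the file before starting the node, for example with Python's built-in JSON tool:

python3 -m json.tool ~/.lpData/aiModels.json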

5. Launch an (Offchain) AI Orchestrator

Execute the AI Subnet Docker image using the following command:

docker run \
    --name livepeer_ai_orchestrator \
    -v ~/.lpData/:/root/.lpData/ \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --network host \
    --gpus all \
    livepeer/go-livepeer:ai-video \
    -orchestrator \
    -transcoder \
    -serviceAddr 0.0.0.0:8936 \
    -v 6 \
    -nvidia "all" \
    -aiWorker \
    -aiModels /root/.lpData/aiModels.json \
    -aiModelsDir ~/.lpData/models

This command launches an offchain AI Orchestrator node. While most of the flags are similar to those used when operating a Mainnet Transcoding Network Orchestrator node (explained in the go-livepeer CLI reference), there are a few AI Subnet-specific flags:

  • -aiWorker: This flag enables the AI Worker functionality.
  • -aiModels: This flag sets the path to the JSON file that contains the AI models.
  • -aiModelsDir: This flag indicates the directory where the AI models are stored on the host machine.
  • -aiRunnerImage: This optional flag specifies which version of the ai-runner image is used. Example: livepeer/ai-runner:0.0.2

Moreover, the --network host flag facilitates communication between the AI Orchestrator and the AI Runner container.

Please note that because the AI Runner uses a docker-out-of-docker setup, the path passed to -aiModelsDir must refer to a location on the host machine, not inside the container.

6. Confirm Successful Startup of the AI Orchestrator

If your AI Subnet Orchestrator node is functioning correctly, you should see the following output:

2024/05/01 09:01:39 INFO Starting managed container gpu=0 name=text-to-image_ByteDance_SDXL-Lightning modelID=ByteDance/SDXL-Lightning
...
I0405 22:03:17.427058 2655655 rpc.go:301] Connecting RPC to uri=https://0.0.0.0:8936
I0405 22:03:17.430371 2655655 rpc.go:254] Received Ping request
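
If you do not see this output, inspect the container logs for errors; the container name matches the one set in the docker run command above:

docker logs -f livepeer_ai_orchestrator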

7. Check Port Availability

To make your AI Subnet Orchestrator node accessible from the internet, you need to configure your network settings. Ensure that port 8936 is unblocked on your machine. Additionally, consider setting up port forwarding on your router so that Gateway nodes can reach your Orchestrator from the internet.
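
One way to verify that the port is reachable is to probe it from another machine with netcat, replacing <your-public-ip> with your machine's public IP address:

nc -zv <your-public-ip> 8936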

Confirm the AI Orchestrator is Operational

Once the AI Subnet Orchestrator node is up and running, validate its operation by sending an AI inference request directly to the ai-runner container. The most straightforward way to do this is through the Swagger UI, accessible at http://localhost:8000/docs.

Swagger UI interface

1. Access the Swagger UI

Navigate to http://localhost:8000/docs in your web browser to open the Swagger UI interface.

2. Initiate an Inference Request

Initiate an inference request to the POST /text-to-image endpoint by clicking the Try it out button. Use the following example JSON payload:

{
    "prompt": "A cool cat on the beach."
}

This request will instruct the AI model to generate an image based on the text in the prompt field.
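
Alternatively, you can issue the same request from the command line with curl instead of the Swagger UI:

curl -X POST http://localhost:8000/text-to-image \
    -H "Content-Type: application/json" \
    -d '{"prompt": "A cool cat on the beach."}'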

3. Inspect the Inference Response

If the AI Orchestrator node is functioning correctly, you should receive a response similar to the following:

{
    "images": [
        {
            "url": "data:image/png;base64,iVBORw0KGgoAA...",
            "seed": 2724904334
        }
    ]
}

The url field contains the base64-encoded image generated by the AI model. To convert it to a PNG file, use a base64 decoder such as Base64.guru.
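
Alternatively, you can decode it from the command line. The sketch below assumes a GNU base64 utility and that you have saved the base64 payload (the part after the data:image/png;base64, prefix) to a file named image.txt:

base64 -d image.txt > image.png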