Start your AI Gateway

The Livepeer AI network is currently in Beta but is already integrated into the main go-livepeer software. You can run the Livepeer AI software using one of the following methods:

  • Docker (Recommended): The simplest and preferred method.
  • Pre-built Binaries: An alternative if you prefer not to use Docker.
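
For example, a minimal off-chain Gateway launch with Docker might look like the following sketch. The image tag, data directory, ports, and flag values here are illustrative assumptions, not a definitive invocation; follow the steps below for your exact configuration:

```shell
# Sketch: off-chain AI Gateway listening on port 8937.
# Image tag, flags, and values are assumptions -- adjust to your setup.
docker run --name livepeer_ai_gateway --rm \
    -v ~/.lpData/:/root/.lpData/ \
    --network host \
    livepeer/go-livepeer:latest \
    -gateway \
    -httpAddr 0.0.0.0:8937 \
    -httpIngest \
    -orchAddr 0.0.0.0:8936 \
    -v 6
```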

Start the AI Gateway

Follow the steps below to start your Livepeer AI Gateway node:

Confirm the AI Gateway is Operational

After launching your Livepeer AI Gateway node, verify its operation by sending an AI inference request. Ensure that the Gateway is connected to an active off-chain AI Orchestrator node. For instructions on setting up an AI Orchestrator, refer to the AI Orchestrator Setup Guide.

Step 1: Launch an AI Orchestrator

Start an AI Orchestrator node on port 8936 by following the AI Orchestrator Setup Guide. Ensure that the Orchestrator has loaded the necessary model (e.g., “ByteDance/SDXL-Lightning”).
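
As a rough sketch only (the flags and paths below are assumptions; the AI Orchestrator Setup Guide is authoritative), an off-chain AI Orchestrator might be launched with Docker along these lines:

```shell
# Sketch: off-chain AI Orchestrator on port 8936.
# Flags and the aiModels.json path are assumptions -- see the
# AI Orchestrator Setup Guide for the exact invocation.
docker run --name livepeer_ai_orchestrator --rm \
    -v ~/.lpData/:/root/.lpData/ \
    --network host \
    livepeer/go-livepeer:latest \
    -orchestrator \
    -serviceAddr 0.0.0.0:8936 \
    -aiWorker \
    -aiModels /root/.lpData/aiModels.json \
    -v 6
```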

Step 2: Link Gateway to AI Orchestrator

Specify the Orchestrator’s address when launching the Gateway using the -orchAddr flag. For example, if the Orchestrator from the previous step listens on port 8936:

-orchAddr 0.0.0.0:8936
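
If you run several Orchestrators, -orchAddr accepts a comma-separated list of addresses, and the Gateway will select among them (the second address below is a hypothetical example):

```
-orchAddr 0.0.0.0:8936,0.0.0.0:8938
```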

Step 3: Submit an Inference Request

To submit an AI inference request, refer to the AI API reference. For example, to generate an image from text, use the following curl command:

curl -X POST "http://0.0.0.0:8937/text-to-image" \
    -H "Content-Type: application/json" \
    -d '{
        "model_id":"ByteDance/SDXL-Lightning",
        "prompt":"A cool cat on the beach",
        "width": 1024,
        "height": 1024
    }'

Step 4: Inspect the Response

If the request is successful, you should see a response like this:

{
    "images": [
        {
            "seed": 2562822894,
            "url": "https://0.0.0.0:8937/stream/d0fc1fc6/8fdf5a94.png"
        }
    ]
}
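
To retrieve the generated image, extract the URL from the JSON response and download it. A small sketch using only POSIX tools, with the sample response above hard-coded as the input (in practice, capture the output of the curl command instead):

```shell
# Sample response from the Gateway (in practice: RESPONSE=$(curl ... )).
RESPONSE='{"images":[{"seed":2562822894,"url":"https://0.0.0.0:8937/stream/d0fc1fc6/8fdf5a94.png"}]}'

# Pull the first image URL out of the JSON with sed.
IMAGE_URL=$(printf '%s' "$RESPONSE" | sed -n 's/.*"url": *"\([^"]*\)".*/\1/p')
echo "$IMAGE_URL"
```

If jq is installed, jq -r '.images[0].url' is a more robust way to do the same extraction; pass the resulting URL to curl -O to save the image locally.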

Refer to the Text-to-image Pipeline Documentation for more information.