Start your AI Gateway

The Livepeer AI network is currently in Beta but is already integrated into the main go-livepeer software. You can run the Livepeer AI software using one of the following methods:

  • Docker (Recommended): The simplest and preferred method.
  • Pre-built Binaries: An alternative if you prefer not to use Docker.

Start the AI Gateway

Follow the steps below to start your Livepeer AI Gateway node:

1. Retrieve the Livepeer AI Docker Image

Fetch the latest Livepeer AI Docker image with the following command:

docker pull livepeer/go-livepeer:master
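
To confirm the image is available locally, you can list it with Docker:

docker images livepeer/go-livepeer:master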

2. Launch an Off-chain AI Gateway

Run the Docker container for your AI Gateway node:

docker run \
    --name livepeer_ai_gateway \
    -v ~/.lpData2/:/root/.lpData2 \
    -p 8937:8937 \
    --network host \
    livepeer/go-livepeer:master \
    -datadir ~/.lpData2 \
    -gateway \
    -orchAddr <ORCH_LIST> \
    -httpAddr 0.0.0.0:8937 \
    -v 6 \
    -httpIngest

This command launches an off-chain AI Gateway node. The flags are similar to those used for a Mainnet Transcoding Network Gateway. See the go-livepeer CLI reference for details.
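
If you need to review the node's startup output later, you can follow the container logs (using the container name from the command above):

docker logs -f livepeer_ai_gateway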

3. Confirm Successful Startup

Upon successful startup, you should see output similar to:

I0501 11:07:47.609839       1 mediaserver.go:201] Transcode Job Type: [{P240p30fps16x9 600k 30 0 426x240 16:9 0 0 0s 0 0 0 0} {P360p30fps16x9 1200k 30 0 640x360 16:9 0 0 0s 0 0 0 0}]
I0501 11:07:47.609917       1 mediaserver.go:226] HTTP Server listening on http://0.0.0.0:8937
I0501 11:07:47.609963       1 lpms.go:92] LPMS Server listening on rtmp://127.0.0.1:1935

4. Check Port Availability

Ensure that port 8937 is open and accessible. If the Gateway needs to be reachable from the internet, configure port forwarding on your router.
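
As a quick local check, you can test whether the port accepts TCP connections, for example with netcat (assumes nc is installed; substitute your public IP to test reachability from outside your network):

nc -zv localhost 8937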

Confirm the AI Gateway is Operational

After launching your Livepeer AI Gateway node, verify its operation by sending an AI inference request. Ensure that the Gateway is connected to an active off-chain AI Orchestrator node. For instructions on setting up an AI Orchestrator, refer to the AI Orchestrator Setup Guide.

1. Launch an AI Orchestrator

Start an AI Orchestrator node on port 8936 by following the AI Orchestrator Setup Guide. Ensure that the Orchestrator has loaded the necessary model (e.g., “ByteDance/SDXL-Lightning”).

2. Link Gateway to AI Orchestrator

When launching the Gateway, replace <ORCH_LIST> with the address of your Orchestrator:

-orchAddr 0.0.0.0:8936
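
The -orchAddr flag also accepts a comma-separated list if you want the Gateway to reach multiple Orchestrators; the second address below is a hypothetical example:

-orchAddr 0.0.0.0:8936,0.0.0.0:8938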

3. Submit an Inference Request

To submit an AI inference request, refer to the AI API reference. For example, to generate an image from text, use the following curl command:

curl -X POST "http://0.0.0.0:8937/text-to-image" \
    -H "Content-Type: application/json" \
    -d '{
        "model_id":"ByteDance/SDXL-Lightning",
        "prompt":"A cool cat on the beach",
        "width": 1024,
        "height": 1024
    }'

4. Inspect the Response

If the request is successful, you should see a response like this:

{
    "images": [
        {
            "seed": 2562822894,
            "url": "https://0.0.0.0:8937/stream/d0fc1fc6/8fdf5a94.png"
        }
    ]
}
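
As a convenience, you can extract the image URL from the response and download the result in one step. This is a sketch assuming jq is installed; the -k flag is included because the returned URL may be served over HTTPS with a self-signed certificate:

curl -s -X POST "http://0.0.0.0:8937/text-to-image" \
    -H "Content-Type: application/json" \
    -d '{"model_id":"ByteDance/SDXL-Lightning","prompt":"A cool cat on the beach","width":1024,"height":1024}' \
    | jq -r '.images[0].url' \
    | xargs curl -sk -O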

Refer to the Text-to-image Pipeline Documentation for more information.