The AI Subnet is not yet integrated into the main go-livepeer software due to its Alpha status. To enable AI inference capabilities on your Gateway node, please use the ai-video branch of go-livepeer. This branch contains the necessary software for the AI Gateway node. Currently, there are two methods to run the AI Subnet software:

  • Docker: This is the most straightforward and recommended method to run the AI Gateway node.
  • Pre-built Binaries: If you prefer not to use Docker, pre-built binaries are available.

Start the AI Gateway

Please follow the steps below to start your AI Subnet Gateway node:


Retrieve the AI Subnet Docker Image

Fetch the latest AI Subnet Docker image from the Livepeer Docker Hub with the following command:

docker pull livepeer/go-livepeer:ai-video
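To double-check that the image downloaded correctly before launching it, you can list it with standard Docker tooling (output details such as image ID and size will vary by machine):

```shell
# Confirm the ai-video image is present locally; one row should be listed.
docker images livepeer/go-livepeer:ai-video
```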

Launch an (Offchain) AI Gateway

Execute the AI Subnet Docker image using the following command:

docker run \
    --name livepeer_ai_gateway \
    -v ~/.lpData2/:/root/.lpData2 \
    -p 8937:8937 \
    --network host \
    livepeer/go-livepeer:ai-video \
    -datadir ~/.lpData2 \
    -gateway \
    -orchAddr <ORCH_LIST> \
    -httpAddr 0.0.0.0:8937 \
    -v 6

This launches an offchain AI Gateway node. The flags are similar to those used for a Mainnet Transcoding Network Gateway node. For more information, see the go-livepeer CLI reference.
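If you run the container in the background (for example by adding Docker's `-d` flag to the command above), the same startup output can be inspected afterwards with standard Docker tooling:

```shell
# Stream the Gateway container's logs; Ctrl+C stops following without
# stopping the container itself.
docker logs -f livepeer_ai_gateway
```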


Confirm Successful Startup of the AI Gateway

If your AI Subnet Gateway node is functioning correctly, you should see the following output:

I0501 11:07:47.609839       1 mediaserver.go:201] Transcode Job Type: [{P240p30fps16x9 600k 30 0 426x240 16:9 0 0 0s 0 0 0 0} {P360p30fps16x9 1200k 30 0 640x360 16:9 0 0 0s 0 0 0 0}]
I0501 11:07:47.609917       1 mediaserver.go:226] HTTP Server listening on
I0501 11:07:47.609963       1 lpms.go:92] LPMS Server listening on rtmp://

Check Port Availability

To make your AI Subnet Gateway node accessible from the internet, configure your network accordingly: ensure that port 8937 is not blocked by your machine's firewall, and, if the node sits behind a router, set up port forwarding so that incoming traffic on that port reaches the Gateway node.
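As a quick first check from the machine itself, the sketch below probes the port using bash's `/dev/tcp` feature (an assumption: your shell is bash). Note this only confirms a local listener, not reachability from the internet:

```shell
# Try to open a TCP connection to port 8937 on localhost and report the result.
if (echo > /dev/tcp/localhost/8937) 2>/dev/null; then
    echo "port 8937: open"
else
    echo "port 8937: closed or blocked"
fi
```

For an internet-facing check, probe the port from a host outside your network instead.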

Confirm the AI Gateway is Operational

After launching the AI Subnet Gateway node, verify its operation by sending an AI inference request directly to it. This requires an active offchain AI Orchestrator node. For guidance on setting up an AI Orchestrator node, refer to the AI Orchestrator Setup Guide.


Launch an AI Orchestrator

Start an AI Orchestrator node on port 8936 following the AI Orchestrator Setup Guide. Ensure it has loaded the desired model for inference (e.g., “ByteDance/SDXL-Lightning”).


Link Gateway to AI Orchestrator

To connect your Gateway node to the AI Orchestrator node, specify the Orchestrator’s address when launching the Gateway node: replace <ORCH_LIST> in the launch command above with the Orchestrator’s address, e.g. 0.0.0.0:8936 for the local Orchestrator started in the previous step.
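As an illustration (assuming the Orchestrator from the previous step is listening locally on port 8936, and the Gateway serves HTTP on the 8937 port mapped earlier), the launch command becomes:

```shell
# Same launch command as before, with the Orchestrator address filled in.
# 0.0.0.0:8936 assumes a local Orchestrator; adjust host/port to your setup.
docker run \
    --name livepeer_ai_gateway \
    -v ~/.lpData2/:/root/.lpData2 \
    -p 8937:8937 \
    --network host \
    livepeer/go-livepeer:ai-video \
    -datadir ~/.lpData2 \
    -gateway \
    -orchAddr 0.0.0.0:8936 \
    -httpAddr 0.0.0.0:8937 \
    -v 6
```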


Submit an Inference Request

Refer to the AI API reference to understand how to submit an inference request to the Gateway node. For instance, to generate an image from text, use the following curl command, replacing <GATEWAY_URL> with your Gateway node’s address and the pipeline route given in the AI API reference:

curl -X POST "<GATEWAY_URL>" \
    -H "Content-Type: application/json" \
    -d '{
        "prompt": "A cool cat on the beach",
        "width": 1024,
        "height": 1024
    }'

Inspect the Response

If the Gateway node is functioning correctly, you should receive a response similar to the following:

    "images": [
        "seed": 2562822894,
        "url": ""

Consult the Text-to-image Pipeline Documentation for more details on interpreting the response.
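Any JSON-aware tool can pull fields out of that response. The sketch below assumes python3 is available and works on a saved sample response with a stand-in URL (the real url value comes from your Gateway):

```shell
# Write a sample response (same shape as above; the URL is a placeholder),
# then extract the first image's url and seed with python3's json module.
cat > response.json <<'EOF'
{"images": [{"seed": 2562822894, "url": "https://example.com/image.png"}]}
EOF
python3 -c '
import json
resp = json.load(open("response.json"))
print(resp["images"][0]["url"])   # first generated image URL
print(resp["images"][0]["seed"])  # seed used for that image
'
```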