Start your AI Gateway
The Livepeer AI network is currently in Beta but is already integrated into
the main go-livepeer software. You
can run the Livepeer AI software using one of the following methods:
- Docker (Recommended): The simplest and preferred method.
- Pre-built Binaries: An alternative if you prefer not to use Docker.
Start the AI Gateway
Follow the steps below to start your Livepeer AI Gateway node:
Use Docker (Recommended)
Retrieve the Livepeer AI Docker Image
Launch an Off-chain AI Gateway
Run the Docker container for your AI Gateway node:

docker run \
--name livepeer_ai_gateway \
-v ~/.lpData2/:/root/.lpData2 \
-p 8937:8937 \
--network host \
livepeer/go-livepeer:master \
-datadir ~/.lpData2 \
-gateway \
-orchAddr <orchestrator list> \
-httpAddr 0.0.0.0:8937 \
-v 6 \
-httpIngest
This command launches an off-chain AI Gateway node. The flags are similar to those used for a Mainnet Transcoding Network Gateway; see the go-livepeer CLI reference for details.
Confirm Successful Startup
Upon successful startup, you should see output similar to:

I0501 11:07:47.609839 1 mediaserver.go:201] Transcode Job Type: [{P240p30fps16x9 600k 30 0 426x240 16:9 0 0 0s 0 0 0 0} {P360p30fps16x9 1200k 30 0 640x360 16:9 0 0 0s 0 0 0 0}]
I0501 11:07:47.609917 1 mediaserver.go:226] HTTP Server listening on http://0.0.0.0:8937
I0501 11:07:47.609963 1 lpms.go:92] LPMS Server listening on rtmp://127.0.0.1:1935
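A quick grep over the container logs confirms the Gateway's HTTP server is up. The snippet below is a sketch: it greps a sample copy of the expected startup line so it runs standalone; against a live node you would pipe the real logs from docker logs into the same grep.

```shell
# Sketch: verify the "HTTP Server listening" line appears in the Gateway logs.
# Against a running container you would use:
#   docker logs livepeer_ai_gateway 2>&1 | grep "HTTP Server listening"
# Here a sample of the expected startup line stands in for the real logs.
sample='I0501 11:07:47.609917 1 mediaserver.go:226] HTTP Server listening on http://0.0.0.0:8937'
echo "$sample" | grep -q 'HTTP Server listening on http://0.0.0.0:8937' && echo "Gateway is listening"
```

If the grep finds the line, the snippet prints "Gateway is listening".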
Check Port Availability
Ensure that port 8937 is open and accessible, and configure your router for port forwarding if necessary to make the Gateway accessible from the internet.
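One way to test the port locally is a plain TCP connect. The helper below is a hypothetical sketch (not part of go-livepeer) that uses bash's /dev/tcp pseudo-device, so it requires bash rather than a plain POSIX sh; for checks from outside your network, an external port-checking service is still needed.

```shell
# Sketch: check_port is a hypothetical helper that returns success when a
# TCP port accepts connections, using bash's /dev/tcp pseudo-device.
check_port() {
  local host="$1" port="$2"
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

if check_port 127.0.0.1 8937; then
  echo "Gateway port 8937 is reachable"
else
  echo "Gateway port 8937 is not reachable"
fi
```

Run it on the Gateway host first to confirm the process is listening, then from another machine to confirm the port forwarding works.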
Use Binaries
Download the Latest Livepeer AI Binary
Download the latest Livepeer AI binary for your system:

wget https://build.livepeer.live/go-livepeer/livepeer-<OS>-<ARCH>.tar.gz
Replace <OS> and <ARCH> with your operating system and architecture (e.g., linux-amd64 for Linux AMD64). For more details, see the go-livepeer installation guide.
The Windows and macOS (amd64) binaries of Livepeer AI are not available yet.
Extract and Configure the Binary
Once downloaded, extract the binary to a directory of your choice.
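The <OS>-<ARCH> substitution above can be scripted. The sketch below assembles the download URL for Linux AMD64 (adjust OS and ARCH for your platform) and leaves the actual download and extraction commented out, since those need network access.

```shell
# Compose the download URL from the <OS>-<ARCH> pattern (here: linux-amd64).
OS=linux
ARCH=amd64
ARCHIVE="livepeer-${OS}-${ARCH}.tar.gz"
URL="https://build.livepeer.live/go-livepeer/${ARCHIVE}"
echo "$URL"   # https://build.livepeer.live/go-livepeer/livepeer-linux-amd64.tar.gz
# Download and unpack (requires network access):
#   wget "$URL" && tar -xzf "$ARCHIVE"
```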
Launch an Off-chain AI Gateway
Start the AI Gateway node with the following command:

./livepeer \
-datadir ~/.lpData2 \
-gateway \
-orchAddr <orchestrator list> \
-httpAddr 0.0.0.0:8937 \
-v 6 \
-httpIngest
This command launches an off-chain AI Gateway node. Refer to the go-livepeer CLI reference for more details on the flags.
Confirm Successful Startup
Check the terminal for the following output to confirm successful startup:

I0501 11:07:47.609839 1 mediaserver.go:201] Transcode Job Type: [{P240p30fps16x9 600k 30 0 426x240 16:9 0 0 0s 0 0 0 0} {P360p30fps16x9 1200k 30 0 640x360 16:9 0 0 0s 0 0 0 0}]
I0501 11:07:47.609917 1 mediaserver.go:226] HTTP Server listening on http://0.0.0.0:8937
I0501 11:07:47.609963 1 lpms.go:92] LPMS Server listening on rtmp://127.0.0.1:1935
Check Port Availability
Ensure that port 8937 is open and accessible, and configure port forwarding if needed.
Confirm the AI Gateway is Operational
After launching your Livepeer AI Gateway node, verify its operation by sending
an AI inference request. Ensure that the Gateway is connected to an active
off-chain AI Orchestrator node. For instructions on setting up an AI
Orchestrator, refer to the
AI Orchestrator Setup Guide.
Launch an AI Orchestrator
Start an AI Orchestrator node on port 8936 by following the AI Orchestrator Setup Guide. Ensure that the Orchestrator has loaded the necessary model (e.g., “ByteDance/SDXL-Lightning”).
Link Gateway to AI Orchestrator
Specify the Orchestrator’s address with the -orchAddr flag when launching the Gateway, replacing <ORCH_LIST> with the Orchestrator’s address.
Submit an Inference Request
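As a concrete sketch, assuming the Orchestrator from the previous step is reachable at 0.0.0.0:8936 (that address is an assumption; substitute your own), the Gateway launch from the binary path would look like:

```shell
./livepeer \
    -datadir ~/.lpData2 \
    -gateway \
    -orchAddr 0.0.0.0:8936 \
    -httpAddr 0.0.0.0:8937 \
    -v 6 \
    -httpIngest
```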
To submit an AI inference request, refer to the AI API reference. For example, to generate an image from text, use the following curl command:

curl -X POST "http://0.0.0.0:8937/text-to-image" \
-H "Content-Type: application/json" \
-d '{
"model_id":"ByteDance/SDXL-Lightning",
"prompt":"A cool cat on the beach",
"width": 1024,
"height": 1024
}'
Inspect the Response
If the request is successful, you should see a response like this:

{
"images": [
{
"seed": 2562822894,
"url": "https://0.0.0.0:8937/stream/d0fc1fc6/8fdf5a94.png"
}
]
}
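For scripting, the image URL can be pulled out of that JSON response. If jq is installed, jq -r '.images[0].url' is the idiomatic way; the sketch below uses sed instead to avoid the dependency, run against a sample response of the same shape.

```shell
# Sample response (same shape as shown above), used to demonstrate
# extracting the first image URL without jq.
response='{"images":[{"seed":2562822894,"url":"https://0.0.0.0:8937/stream/d0fc1fc6/8fdf5a94.png"}]}'
url=$(echo "$response" | sed -n 's/.*"url": *"\([^"]*\)".*/\1/p')
echo "$url"   # prints https://0.0.0.0:8937/stream/d0fc1fc6/8fdf5a94.png
# The image can then be fetched, e.g.: curl -s -o output.png "$url"
```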
Refer to the Text-to-image Pipeline Documentation for more information.

Last modified on February 18, 2026