Text-to-Image
Overview
The text-to-image pipeline of the Livepeer AI network allows you to generate high-quality images from text descriptions. This pipeline is powered by the latest diffusion models in the HuggingFace text-to-image pipeline.
Models
Warm Models
The current warm model requested for the text-to-image pipeline is:
- SG161222/RealVisXL_V4.0_Lightning: A streamlined version of RealVisXL_V4.0, designed for faster inference while still aiming for photorealism.
Furthermore, several Orchestrators are currently maintaining the following model in a ready state:
- ByteDance/SDXL-Lightning:
A high-performance diffusion model developed by ByteDance.
For faster responses with different text-to-image diffusion models, ask Orchestrators to load them on their GPUs via the ai-video channel in the Discord Server.
On-Demand Models
The following models have been tested and verified for the text-to-image pipeline:
If a specific model you wish to use is not listed, please submit a feature request on GitHub to get the model verified and added to the list.
Basic Usage Instructions
For a detailed understanding of the text-to-image endpoint and to experiment with the API, see the Livepeer AI API Reference. For examples of effective prompts, visit PromptHero.
To generate an image with the text-to-image pipeline, send a POST request to the Gateway’s text-to-image API endpoint:
curl -X POST "https://<GATEWAY_IP>/text-to-image" \
    -H "Content-Type: application/json" \
    -d '{
        "model_id": "ByteDance/SDXL-Lightning",
        "prompt": "A cool cat on the beach",
        "width": 1024,
        "height": 1024
    }'
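The same request can be issued from Python. Below is a minimal sketch using only the standard library; the gateway URL is a placeholder standing in for your own Gateway's address, and `build_payload` is a hypothetical helper, not part of any Livepeer SDK:

```python
import json
import urllib.request

# Placeholder: substitute your AI Gateway's IP address or hostname.
GATEWAY_URL = "https://<GATEWAY_IP>/text-to-image"


def build_payload(model_id, prompt, width=1024, height=1024):
    """Assemble the JSON body expected by the text-to-image endpoint."""
    return {
        "model_id": model_id,
        "prompt": prompt,
        "width": width,
        "height": height,
    }


def generate_image(gateway_url, payload):
    """POST the payload to the Gateway and return the decoded JSON response."""
    request = urllib.request.Request(
        gateway_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


payload = build_payload("ByteDance/SDXL-Lightning", "A cool cat on the beach")
# generate_image(GATEWAY_URL, payload)  # uncomment once GATEWAY_URL is set
```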
In this command:
- <GATEWAY_IP> should be replaced with your AI Gateway’s IP address.
- model_id is the diffusion model for image generation.
- prompt is the text description for the image.
For additional optional parameters, refer to the Livepeer AI API Reference.
After execution, the Orchestrator processes the request and returns the response to the Gateway:
{
"images": [
{
"nsfw": false,
"seed": 2562822894,
"url": "https://<GATEWAY_IP>/stream/d0fc1fc6/8fdf5a94.png"
}
]
}
The url in the response is the URL of the generated image. Download the image with:
curl -O "https://<GATEWAY_IP>/stream/d0fc1fc6/8fdf5a94.png"
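The download step can also be scripted. A sketch, assuming the response shape shown above (the `image_urls` and `download` helpers are illustrative, not part of any Livepeer SDK):

```python
import json
import urllib.request

# Sample response body mirroring the example above (values are illustrative).
response_text = """
{
  "images": [
    {
      "nsfw": false,
      "seed": 2562822894,
      "url": "https://<GATEWAY_IP>/stream/d0fc1fc6/8fdf5a94.png"
    }
  ]
}
"""


def image_urls(response_text):
    """Extract the URL of every generated image from the response body."""
    body = json.loads(response_text)
    return [image["url"] for image in body["images"]]


def download(url, destination):
    """Save one generated image to disk (the equivalent of `curl -O`)."""
    urllib.request.urlretrieve(url, destination)


urls = image_urls(response_text)
# download(urls[0], "8fdf5a94.png")  # uncomment with a real Gateway URL
```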
Applying LoRa Models
To apply LoRa filters to an image, include the loras field in your request:
curl -X POST "https://<GATEWAY_IP>/text-to-image" \
    -H "Content-Type: application/json" \
    -d '{
        "model_id": "stabilityai/stable-diffusion-xl-base-1.0",
        "prompt": "A cool cat on the beach",
        "width": 1024,
        "height": 1024,
        "loras": "{ \"latent-consistency/lcm-lora-sdxl\": 1.0, \"nerijs/pixel-art-xl\": 1.2 }"
    }'
You can find a list of available LoRa models for various models on lora-studio.
Orchestrator Configuration
To configure your Orchestrator to serve the text-to-image pipeline, refer to the Orchestrator Configuration guide.
System Requirements
The following system requirements are recommended for optimal performance:
- NVIDIA GPU with at least 24GB of VRAM.
API Reference
Explore the text-to-image endpoint and experiment with the API in the Livepeer AI API Reference.