Overview

The image-to-image pipeline of the Livepeer AI network enables advanced image manipulations, including style transfer, image enhancement, and more. This pipeline leverages cutting-edge diffusion models from HuggingFace's image-to-image pipeline.

Models

Warm Models

The current warm model requested for the image-to-image pipeline is:

  • timbrooks/instruct-pix2pix: A powerful diffusion model that edits images to a high-quality standard based on human-written instructions.

For faster responses with different image-to-image diffusion models, ask Orchestrators to load them on their GPUs via the ai-video channel in the Livepeer Discord Server.

On-Demand Models

The following models have been tested and verified for the image-to-image pipeline:

If a specific model you wish to use is not listed, please submit a feature request on GitHub to get the model verified and added to the list.

Basic Usage Instructions

For a detailed understanding of the image-to-image endpoint and to experiment with the API, see the Livepeer AI API Reference.

To generate an image with the image-to-image pipeline, send a POST request to the Gateway’s image-to-image API endpoint:

curl -X POST https://<GATEWAY_IP>/image-to-image \
    -F model_id="timbrooks/instruct-pix2pix" \
    -F image=@<PATH_TO_IMAGE>/cool-cat.png \
    -F prompt="a hat"

In this command:

  • <GATEWAY_IP> should be replaced with your AI Gateway’s IP address.
  • model_id is the diffusion model to use for the edit.
  • The image field specifies the path to the input image file to be transformed.
  • prompt is the text instruction describing the desired edit.

For additional optional parameters, refer to the Livepeer AI API Reference.
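The same request can also be sent programmatically. The sketch below mirrors the curl command above using the third-party `requests` library; the Gateway address, image path, and error handling are assumptions for illustration, not part of the official API surface.

```python
# Hypothetical helper mirroring the curl command above.
# Assumes the third-party `requests` library is installed; the Gateway IP
# and image path are placeholders you must supply.
import requests


def image_to_image_url(gateway_ip: str) -> str:
    """Build the Gateway's image-to-image endpoint URL."""
    return f"https://{gateway_ip}/image-to-image"


def edit_image(gateway_ip: str, image_path: str, prompt: str,
               model_id: str = "timbrooks/instruct-pix2pix") -> dict:
    """POST a multipart form (model_id, image, prompt) and return the JSON body."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            image_to_image_url(gateway_ip),
            data={"model_id": model_id, "prompt": prompt},
            files={"image": f},
        )
    resp.raise_for_status()
    return resp.json()
```

This is equivalent to the curl invocation: `data` carries the text form fields and `files` attaches the image, so the request goes out as multipart/form-data.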

After execution, the Orchestrator processes the request and returns the response to the Gateway:

{
  "images": [
    {
      "nsfw": false,
      "seed": 3197613440,
      "url": "https://<GATEWAY_IP>/stream/dd5ad78d/7adde483.png"
    }
  ]
}

The url in the response is the URL of the generated image. Download the image with:

curl -O "https://<STORAGE_ENDPOINT>/stream/dd5ad78d/7adde483.png"

Applying LoRA Models

To apply LoRA filters to an image, include the loras field in your request:

curl -X POST https://<GATEWAY_IP>/image-to-image \
    -F model_id="ByteDance/SDXL-Lightning" \
    -F image=@<PATH_TO_IMAGE>/cool-cat.png \
    -F prompt="a hat" \
    -F loras='{ "nerijs/pixel-art-xl": 1.2 }'
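The loras field is a JSON string mapping a LoRA model ID to its weight. A quick way to build it without hand-escaping quotes, sketched in Python:

```python
import json

# Serialize the LoRA model-to-weight mapping into the JSON string
# expected by the loras form field.
loras = json.dumps({"nerijs/pixel-art-xl": 1.2})
print(loras)  # {"nerijs/pixel-art-xl": 1.2}
```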

You can find a list of available LoRA models for various base models on lora-studio.

Orchestrator Configuration

To configure your Orchestrator to serve the image-to-image pipeline, refer to the Orchestrator Configuration guide.

System Requirements

The following system requirements are recommended for optimal performance:

Pricing

We are planning to simplify the pricing in the future so orchestrators can set one AI price per compute unit and have the system automatically scale based on the model’s compute requirements.

The pricing for the image-to-image pipeline is based on competitor pricing. However, we strongly encourage orchestrators to set their own pricing based on their costs and requirements. Setting a competitive price will help attract more jobs, as Gateways can set their maximum price for a job. The current recommended pricing for this pipeline is 1.9073484e-08 USD per input pixel (height * width * output images).
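As a worked example of that formula (the image size and count below are assumptions for illustration), a single 1024×1024 output at the recommended rate costs roughly $0.02:

```python
PRICE_PER_PIXEL_USD = 1.9073484e-08  # recommended rate from above


def job_cost(height: int, width: int, num_images: int) -> float:
    """Cost in USD: height * width * number of output images * per-pixel price."""
    return height * width * num_images * PRICE_PER_PIXEL_USD


print(round(job_cost(1024, 1024, 1), 4))  # 0.02
```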

API Reference

Explore the image-to-image endpoint and experiment with the API in the Livepeer AI API Reference.