Overview

The AI Subnet’s upscale pipeline provides advanced image upscaling. Powered by the latest diffusion models in HuggingFace’s super-resolution pipeline, it enhances the resolution of input images by a factor of 4.
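For context, the underlying super-resolution model family can also be run directly with HuggingFace's diffusers library. The sketch below is purely illustrative of the 4x upscaling step, not how the AI Subnet serves it; it assumes the stabilityai/stable-diffusion-x4-upscaler checkpoint, a CUDA GPU, and a local file named low_res_cat.png:

import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

# Load the 4x upscaler checkpoint (assumption: the checkpoint used in the examples below).
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Upscale a low-resolution image, guided by a short text prompt describing its content.
low_res = load_image("low_res_cat.png")
upscaled = pipe(prompt="A white cat", image=low_res).images[0]
upscaled.save("upscaled_cat.png")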

Models

Warm Models

The current warm model requested for the upscale pipeline is:

  • stabilityai/stable-diffusion-x4-upscaler

For faster responses with other upscale-compatible diffusion models, ask Orchestrators to load them on their GPUs via the ai-video channel in the Discord server.

On-Demand Models

The following models have been tested and verified for the upscale pipeline:

  • stabilityai/stable-diffusion-x4-upscaler

If a specific model you wish to use is not listed, please submit a feature request on GitHub to get the model verified and added to the list.

Basic Usage Instructions

For a detailed understanding of the upscale endpoint and to experiment with the API, see the AI Subnet API Reference.

To upscale an image with the upscale pipeline, send a POST request to the Gateway’s upscale API endpoint:

curl -X POST https://<gateway-ip>/upscale \
    -F model_id="stabilityai/stable-diffusion-x4-upscaler" \
    -F image=@<PATH_TO_IMAGE>/low_res_cat.png \
    -F prompt="A white cat"

In this command:

  • <gateway-ip> should be replaced with your AI Gateway’s IP address.
  • model_id is the diffusion model to use for upscaling.
  • image is the path to the image file to be upscaled; curl uploads the file at that path.
  • prompt is a descriptive text that provides context about the content of the image.

For additional optional parameters, refer to the AI Subnet API Reference.
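The same request can also be sent programmatically. The following is a minimal Python sketch using the requests library, assuming the same gateway address, model, and input file as the curl example above:

import requests

gateway = "https://<gateway-ip>"  # replace with your AI Gateway's IP address

# Send the multipart form request to the /upscale endpoint.
with open("low_res_cat.png", "rb") as image_file:
    response = requests.post(
        f"{gateway}/upscale",
        data={
            "model_id": "stabilityai/stable-diffusion-x4-upscaler",
            "prompt": "A white cat",
        },
        files={"image": image_file},
    )

response.raise_for_status()
print(response.json())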

After execution, the Orchestrator processes the request and returns the response to the Gateway:

{
  "images": [
    {
      "nsfw": false,
      "seed": 3197613440,
      "url": "https://<gateway-ip>/stream/dd5ad78d/7adde483.png"
    }
  ]
}

The url field in the response points to the upscaled image. Download the image with:

curl -O "https://<STORAGE_ENDPOINT>/stream/dd5ad78d/7adde483.png"

API Reference

Explore the upscale endpoint and experiment with the API in the AI Subnet API Reference.