The AI Subnet offers a variety of generative AI pipelines that applications can use to request AI inference jobs on the Livepeer network. The current focus is on Diffusion models built with Hugging Face's Diffusers library, with support for other model types planned in future updates. This section introduces the available pipelines, the models they support, and a basic usage example. For a comprehensive guide on integrating the AI Subnet into your application, refer to the Building on the AI Subnet section.
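As a preview of what a pipeline request involves, the sketch below assembles an HTTP job request for a text-to-image pipeline. The gateway URL, endpoint path, model ID, and parameter names are illustrative assumptions, not the confirmed API; see the Building on the AI Subnet section for the actual integration details.

```python
# Minimal sketch of requesting a text-to-image job from an AI Subnet
# gateway. The gateway URL, endpoint path, and parameter names below are
# illustrative assumptions, not the confirmed API.
import json
import urllib.request


def build_text_to_image_request(gateway_url: str, model_id: str, prompt: str):
    """Assemble an HTTP request for a hypothetical /text-to-image endpoint."""
    payload = {
        "model_id": model_id,  # e.g. a warm model listed on the pipeline page
        "prompt": prompt,
    }
    return urllib.request.Request(
        url=f"{gateway_url}/text-to-image",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_text_to_image_request(
    "https://gateway.example.com",  # hypothetical gateway address
    "stabilityai/sd-turbo",         # hypothetical model ID
    "a sunset over the mountains",
)
# urllib.request.urlopen(req) would submit the job; omitted here.
```

The request is built but not sent, so the sketch stays runnable without a live gateway.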

Models on the AI Subnet

Warm Models

During the Alpha phase of the AI Subnet, Orchestrators are encouraged to keep at least one model per pipeline active on their GPUs (“warm models”). This approach ensures quicker response times for early builders on the Subnet. We’re optimizing GPU model loading/unloading to relax this requirement. The current warm models for each pipeline are listed on their respective pages.

For faster responses with a different Diffusion model, ask Orchestrators to load it onto their GPUs via the ai-video channel in the Discord Server.

On-Demand Models

Orchestrators can, in principle, serve any Diffusion model from Hugging Face on demand, conserving GPU resources by loading models only when they are needed. During the Alpha phase, however, Orchestrators must pre-download a model before it can be served.
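The on-demand pattern amounts to a lazy cache: a model is loaded on first request and reused afterwards. The sketch below illustrates that idea with a stand-in loader; the class and function names are hypothetical, and in a real Orchestrator the loader would be actual Diffusers model loading rather than the fake used here.

```python
# Conceptual sketch of on-demand model loading: a model is fetched into
# memory only on first request, then reused ("kept warm"). The loader is
# a stand-in for real model loading; all names here are illustrative,
# not the Orchestrator's actual implementation.
from typing import Callable, Dict


class ModelCache:
    def __init__(self, loader: Callable[[str], object]):
        self._loader = loader                  # loads a model by its Hugging Face ID
        self._loaded: Dict[str, object] = {}   # model_id -> loaded pipeline

    def get(self, model_id: str):
        # Cold path: load only when the model is first requested.
        if model_id not in self._loaded:
            self._loaded[model_id] = self._loader(model_id)
        # Warm path: reuse the already-loaded pipeline.
        return self._loaded[model_id]


load_calls = []


def fake_loader(model_id: str):
    load_calls.append(model_id)  # record each simulated download/load
    return f"pipeline:{model_id}"


cache = ModelCache(fake_loader)
cache.get("org/model-a")  # cold: triggers a load
cache.get("org/model-a")  # warm: no second load
```

The trade-off the Alpha-phase restriction addresses is visible here: the cold path is slow for large Diffusion models, which is why warm models give quicker responses.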

If a specific model you wish to use is not listed on the respective pipeline page, submit a feature request on GitHub so the model can be verified and added to the list.

Generative AI Pipelines

The subnet currently supports the following generative AI pipelines: