Image To Video
Generate a video from a provided image.
The default Gateway used in this guide is the public Livepeer.cloud Gateway. It is free to use but not intended for production-ready applications. For production-ready applications, consider using the Livepeer Studio Gateway, which requires an API token. Alternatively, you can set up your own Gateway node or partner with one via the ai-video channel on Discord.
Please note that the exact parameters, default values, and responses may vary between models, and not all parameters are available for every model. For model-specific parameters, refer to the respective model documentation in the image-to-video pipeline.
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
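As a minimal sketch, the header can be attached to a request like this; the environment variable name LIVEPEER_AI_API_TOKEN is a placeholder for wherever you store your auth token:

```python
import os

# Bearer authentication header. "LIVEPEER_AI_API_TOKEN" is a placeholder
# environment variable name, not one defined by this API.
headers = {"Authorization": f"Bearer {os.environ['LIVEPEER_AI_API_TOKEN']}"}
```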
Body
Uploaded image to generate a video from.
Hugging Face model ID used for video generation.
The height in pixels of the generated video.
The width in pixels of the generated video.
The frames per second of the generated video.
Conditions the amount of motion in the generated video. The higher the value, the more motion the video will contain.
Amount of noise added to the conditioning image. Higher values reduce resemblance to the conditioning image and increase motion.
Perform a safety check to estimate whether the generated content could be offensive or harmful.
Seed for random number generation.
Number of denoising steps. More steps usually lead to higher-quality results but slower inference. Modulated by strength.
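The sketch below shows what a request might look like. The endpoint path, the field names (model_id, height, width, fps, motion_bucket_id, noise_aug_strength, safety_check, seed, num_inference_steps), and the example model ID are assumptions matched to the descriptions above, not definitive values; check the model documentation and your Gateway's API reference for the exact names and defaults.

```python
import os

import requests

# Placeholder Gateway URL and pipeline path; substitute the Gateway you use.
GATEWAY_URL = "https://<your-gateway>/image-to-video"

headers = {"Authorization": f"Bearer {os.environ['LIVEPEER_AI_API_TOKEN']}"}

with open("input.png", "rb") as image_file:
    response = requests.post(
        GATEWAY_URL,
        headers=headers,
        files={"image": image_file},  # uploaded image to generate a video from
        data={
            # Example Hugging Face model ID; pick one supported by your Gateway.
            "model_id": "stabilityai/stable-video-diffusion-img2vid-xt",
            "height": 576,               # video height in pixels
            "width": 1024,               # video width in pixels
            "fps": 6,                    # frames per second
            "motion_bucket_id": 127,     # higher values yield more motion
            "noise_aug_strength": 0.02,  # noise added to the conditioning image
            "safety_check": True,        # estimate whether output could be harmful
            "seed": 42,                  # fixed seed for reproducible generation
            "num_inference_steps": 25,   # denoising steps (quality vs. speed)
        },
        timeout=600,
    )

response.raise_for_status()
print(response.json())
```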
Response
Response model for video generation.
The generated videos.
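A sketch of reading a response follows. The payload shape, including the images list and its url, seed, and nsfw fields, is an assumption based on the descriptions above; inspect your Gateway's actual response before relying on specific field names.

```python
import json

# Illustrative response payload; the field names ("images", "url", "seed",
# "nsfw") are assumptions and may differ between Gateways and models.
example_payload = json.loads("""
{
  "images": [
    {"url": "https://<your-gateway>/stream/example/video.mp4", "seed": 42, "nsfw": false}
  ]
}
""")

# Each entry points at a generated video.
for media in example_payload["images"]:
    print(media["url"], media["seed"])
```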