POST /image-to-video

The default Gateway used in this guide is the public Livepeer.cloud Gateway. It is free to use but not intended for production applications. For production use, consider the Livepeer Studio Gateway, which requires an API token. Alternatively, you can run your own Gateway node or partner with one via the ai-video channel on Discord.

Please note that the exact parameters, default values, and responses may vary between models, and not all parameters are available for every model. For model-specific parameters, refer to the respective model documentation in the image-to-video pipeline.
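As a quick, hedged sketch, the request below posts an image to a Gateway's image-to-video endpoint using Python's requests library. The Gateway host, API token, and model ID are placeholders rather than values from this page; substitute the ones for the Gateway you are using.

    # Minimal sketch: POST an image to a Gateway's /image-to-video endpoint.
    # The host, token, and model_id below are placeholders, not official values.
    import requests

    GATEWAY_URL = "https://<your-gateway-host>/image-to-video"  # placeholder host
    API_TOKEN = "<token>"  # needed for Gateways that require auth, e.g. Livepeer Studio

    with open("input.png", "rb") as f:
        resp = requests.post(
            GATEWAY_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"image": ("input.png", f, "image/png")},
            data={
                "model_id": "<model-id>",  # placeholder Hugging Face model ID
                "width": 1024,
                "height": 576,
                "fps": 6,
            },
            timeout=600,
        )
    resp.raise_for_status()
    print(resp.json())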

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

multipart/form-data
image
file
required

Uploaded image to generate a video from.

model_id
string
default: ""
required

Hugging Face model ID used for video generation.

height
integer
default: 576

The height in pixels of the generated video.

width
integer
default: 1024

The width in pixels of the generated video.

fps
integer
default: 6

The frames per second of the generated video.

motion_bucket_id
integer
default: 127

Conditions the amount of motion in the generated video; the higher the value, the more motion the video will contain.

noise_aug_strength
number
default: 0.02

Amount of noise added to the conditioning image. Higher values reduce resemblance to the conditioning image and increase motion.

safety_check
boolean
default: true

Perform a safety check to estimate whether the generated content could be offensive or harmful.

seed
integer

Seed for random number generation.

num_inference_steps
integer
default: 25

Number of denoising steps. More steps usually yield higher-quality results at the cost of slower inference. This parameter is modulated by strength.
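Putting the body fields above together, here is a hedged sketch of a full form-data payload using the documented defaults; only the seed value is arbitrary, and the model ID remains a placeholder.

    # Form fields mirroring the parameters documented above (Python dict,
    # suitable for the data= argument of the request sketch earlier).
    form_fields = {
        "model_id": "<model-id>",     # placeholder; see the pipeline's model documentation
        "height": 576,                # default
        "width": 1024,                # default
        "fps": 6,                     # default
        "motion_bucket_id": 127,      # default; higher values add more motion
        "noise_aug_strength": 0.02,   # default; higher values reduce resemblance, add motion
        "safety_check": True,         # default
        "seed": 42,                   # arbitrary example value, for reproducible output
        "num_inference_steps": 25,    # default
    }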

Response

200 - application/json

Response model for video generation.

images
object[]
required

The generated images.
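
The exact shape of each object in images can vary by model and Gateway, so this page only documents the field itself. A hedged sketch of reading the response, continuing from the request example above:

    # Iterate over the generated media objects in the JSON response.
    data = resp.json()
    for item in data["images"]:
        # Field layout varies by model/Gateway; commonly includes a link to the output.
        print(item)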