The `segment-anything-2` pipeline provides direct access to the Segment Anything 2 (SAM 2) model developed by Meta AI Research. In its current version, it supports only image segmentation, enabling it to segment any object in an image. Future versions will also support direct video input, allowing an object to be tracked consistently across all frames of a video in real time. This advancement will unlock new possibilities for video editing and enhance experiences in mixed reality. The pipeline is powered by HuggingFace's facebook/sam2-hiera-large model.
The current warm model requested for the `segment-anything-2` pipeline is facebook/sam2-hiera-large. To have other models loaded on Orchestrator GPUs, ask in the ai-video channel in Discord Server.

The following models have been tested and verified for the `segment-anything-2` pipeline:

Tested and Verified Models
For a detailed understanding of the `segment-anything-2` endpoint and to experiment with the API, see the Livepeer AI API Reference.

To create a segmentation using the `segment-anything-2` pipeline, send a POST request to the Gateway's `segment-anything-2` API endpoint:
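The request below is a minimal sketch rather than the canonical command: the endpoint path and the JSON encoding of `point_coords` and `point_labels` are assumed from the parameter names described next, and the example values are placeholders. See the Livepeer AI API Reference for the authoritative request format.

```bash
# Minimal sketch of a segmentation request (values are placeholders).
# point_coords marks an example pixel (x, y); point_labels=1 assumes the
# common "foreground point" labeling convention.
curl -X POST "https://<GATEWAY_IP>/segment-anything-2" \
  -F model_id="facebook/sam2-hiera-large" \
  -F point_coords="[[120, 100]]" \
  -F point_labels="[1]" \
  -F image=@/absolute/path/to/image.png
```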
In this request:

- `<GATEWAY_IP>` should be replaced with your AI Gateway's IP address.
- The `model_id` field specifies the model to use for image segmentation.
- The `point_coords` field holds the coordinates of the points to be segmented.
- The `point_labels` field holds the labels for the points to be segmented.
- The `image` field holds the absolute path to the image file to be segmented.

To configure your Orchestrator to serve the `segment-anything-2` pipeline, refer to the Orchestrator Configuration guide.
The recommended pricing for the `segment-anything-2` pipeline is based on competitor pricing. However, we strongly encourage orchestrators to set their own pricing based on their costs and requirements. Setting a competitive price will help attract more jobs, as Gateways can set their maximum price for a job. The current recommended pricing for this pipeline is 3.22e-11 USD per input pixel (height * width).
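For example, at this rate a 1024 x 1024 input image works out to roughly 1024 * 1024 * 3.22e-11 ≈ 3.4e-5 USD per segmentation request.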
To serve the `segment-anything-2` pipeline, you must use a pipeline-specific AI Runner container. Pull the required container from Docker Hub using the following command:
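The image tag below is an assumption based on the pipeline name; check Docker Hub for the current pipeline-specific tag.

```bash
# Pull the pipeline-specific AI Runner image (tag assumed from the pipeline name).
docker pull livepeer/ai-runner:segment-anything-2
```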
Explore the `segment-anything-2` endpoint and experiment with the API in the Livepeer AI API Reference.