Models are configured in an `aiModels.json` file, typically located in the `~/.lpData` directory. Below is an example configuration showing the currently recommended models and their respective prices.
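A minimal `aiModels.json` might look like the sketch below. The field names (`pipeline`, `model_id`, `price_per_unit`) and the price values are assumptions for illustration only; check the current recommended models and prices before deploying:

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "ByteDance/SDXL-Lightning",
    "price_per_unit": 1000000,
    "warm": true
  },
  {
    "pipeline": "image-to-video",
    "model_id": "stabilityai/stable-video-diffusion-img2vid-xt",
    "price_per_unit": 1000000,
    "warm": false
  }
]
```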
Each model entry specifies the pipeline it serves (for example, `text-to-image` or `image-to-video`). When `warm` is set to `true`, the model is preloaded on the GPU to decrease runtime. Two optional optimization flags are available:

- `SFAST`: enables the Stable Fast optimization framework, potentially boosting inference speed by up to 25% with no quality loss. It cannot be used in conjunction with `DEEPCACHE`.
- `DEEPCACHE`: enables the DeepCache optimization framework, which can enhance inference speed by up to 50% with minimal quality loss; the speedup becomes more pronounced as the number of inference steps increases. It cannot be used simultaneously with `SFAST`, and it is not recommended for Lightning/Turbo models: they are already optimized, so due to known limitations DeepCache provides no speed benefit for them and may significantly lower image quality.
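Assuming the flags live in a per-model `optimization_flags` object (that field name is an assumption here), the constraints above might be expressed like this, with `DEEPCACHE` left disabled because the example model is a Turbo model:

```json
{
  "pipeline": "text-to-image",
  "model_id": "stabilityai/sd-turbo",
  "warm": true,
  "optimization_flags": {
    "SFAST": true,
    "DEEPCACHE": false
  }
}
```

Only one of the two flags should be enabled at a time, since they cannot be combined.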
To use externally managed model containers, add the `url`, `capacity`, and `token` fields to the model configuration. The only requirement is that the specified `url` responds to the AI Worker the same way the managed containers would (including HTTP error codes). As long as the container management software acts as a pass-through to the model container, any container management software can be used to implement custom management of the runner containers, including Kubernetes, Podman, Docker Swarm, Nomad, or custom scripts that manage container lifecycles based on request volume.
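A model entry pointing at an externally managed container might then look like the following sketch; the URL, capacity, and token values are placeholders:

```json
{
  "pipeline": "text-to-image",
  "model_id": "ByteDance/SDXL-Lightning",
  "warm": true,
  "url": "http://10.0.0.5:8000",
  "capacity": 2,
  "token": "replace-with-a-shared-secret"
}
```

With an entry like this, the AI Worker would check the container at `http://10.0.0.5:8000` on startup and forward up to 2 concurrent requests to it.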
- `url`: used at startup of the AI Worker to confirm a model container is running, via the `/health` endpoint. After startup, inference requests are forwarded to the `url` the same way they are to the managed containers.
- `capacity`: the maximum number of requests that can be processed concurrently for the pipeline/model ID (default is 1). If you auto-scale containers, take care that startup time is fast when setting `warm: true`, because slow response times will negatively impact your selection by Gateways for future requests.
- `token`: used to secure the model container `url` from unauthorized access; using it is strongly suggested if the containers are exposed to external networks.
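The startup handshake an external container must satisfy can be sketched with a stub runner: the worker probes `GET /health` on the configured `url` and, when a `token` is set, must present it. The Bearer-header auth scheme and the `{"status": "OK"}` body used below are assumptions for illustration, not the AI Worker's exact wire format:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

TOKEN = "replace-with-a-shared-secret"  # hypothetical token value


class RunnerStub(BaseHTTPRequestHandler):
    """Stands in for a token-protected external model container."""

    def do_GET(self):
        # Reject callers that do not present the shared token
        # (Bearer header is an assumed scheme for this sketch).
        if self.headers.get("Authorization") != f"Bearer {TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        if self.path == "/health":
            body = json.dumps({"status": "OK"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet


def check_health(url: str, token: str) -> int:
    """Probe /health the way the worker would at startup."""
    req = Request(f"{url}/health", headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        return resp.status


# Run the stub on an ephemeral local port and probe it once.
server = HTTPServer(("127.0.0.1", 0), RunnerStub)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}"

status = check_health(url, TOKEN)
print(status)  # 200 when the container is up and the token matches
server.shutdown()
```

If the container is down or the token is wrong, the probe fails (connection error or 401), which is exactly the signal the worker needs to exclude the container at startup.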