Generate text using a language model.
Questions? Ask in the ai-video channel on Discord.

By default, the /llm endpoint returns a single JSON response in the OpenAI chat/completions format, as shown in the sidebar.
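A minimal sketch of a non-streaming request using Python's standard library. The URL, token, and message below are placeholders (this deployment's real base URL and any additional accepted body fields, such as a model name, depend on your setup):

```python
import json
import urllib.request

# Hypothetical values -- substitute your deployment's URL and your auth token.
URL = "https://example.com/llm"
TOKEN = "sk-your-auth-token"

# Request body in the OpenAI chat/completions format.
body = {
    "messages": [
        {"role": "user", "content": "Write a haiku about the sea."}
    ],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
    method="POST",
)

# To send the request against a live endpoint:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

The commented-out call is left out so the snippet runs without a live server; the response it would return follows the chat/completions shape shown in the sidebar.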
To receive responses token-by-token, set "stream": true in the request body. The server will then use Server-Sent Events (SSE) to stream output in real time, sending each data: line as it arrives. Each streamed chunk is a JSON object in the OpenAI chat.completion.chunk format; the final chunk carries "finish_reason": "stop".
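A sketch of consuming such a stream, assuming the OpenAI SSE conventions (each event is a `data:` line and the stream ends with a `data: [DONE]` sentinel). The sample chunks below are illustrative, not captured output:

```python
import json

def iter_sse_chunks(lines):
    """Yield parsed JSON chunks from an iterable of SSE byte lines.

    Each event is a line of the form b"data: {...}"; the stream ends
    with the b"data: [DONE]" sentinel (OpenAI convention).
    """
    for raw in lines:
        line = raw.decode("utf-8").strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        yield json.loads(payload)

# Simulated stream: two content deltas followed by the [DONE] sentinel.
stream = [
    b'data: {"choices": [{"delta": {"content": "Hel"}, "finish_reason": null}]}\n',
    b'data: {"choices": [{"delta": {"content": "lo"}, "finish_reason": "stop"}]}\n',
    b'data: [DONE]\n',
]

# Concatenate the per-chunk deltas into the full reply text.
text = "".join(
    chunk["choices"][0]["delta"].get("content", "")
    for chunk in iter_sse_chunks(stream)
)
print(text)  # -> Hello
```

With a real connection, the same generator can be fed the response's line iterator (e.g. iterating over the HTTP response object) instead of the hard-coded list.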
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
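Concretely, the header can be built like this (the token value is a placeholder):

```python
token = "sk-your-auth-token"  # hypothetical token -- use your real auth token
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"])  # -> Bearer sk-your-auth-token
```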
Successful Response
The response is of type object.