Page is under construction.
Check the github issues for ways to contribute! Or provide your feedback in this quick form
AI Workers
AI workers are run when you start a node with the -aiWorker flag. They can run in two modes:
- Combined with Orchestrator (-orchestrator -aiWorker): The orchestrator also runs AI processing locally
- Standalone AI Worker (-aiWorker only): Connects to a remote orchestrator via gRPC

Key Points:
- AI workers are the component that actually runs Docker containers (starter.go:1345-1349)
- Gateways only route requests and handle payments; they don’t run containers (byoc.go:25-35)
- BYOC containers are managed by the AI worker’s Docker manager
- For CPU models, you don’t need the -nvidia flag (starter.go:1296-1300)
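A minimal sketch of the two startup modes. The livepeer binary name is as shipped with go-livepeer; using -orchAddr to point a standalone worker at a remote orchestrator is an assumption here, so verify the flag against `livepeer -help` for your build:

```bash
# Combined mode: the orchestrator node also runs AI processing locally
livepeer -orchestrator -aiWorker

# Standalone mode: the AI worker connects to a remote orchestrator over gRPC.
# -orchAddr is assumed to name that orchestrator; check your build's flags.
livepeer -aiWorker -orchAddr 127.0.0.1:8935
```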
BYOC (Bring Your Own Container) Overview
BYOC is Livepeer’s Generic Processing Pipeline that allows you to run custom Docker containers for media processing tasks on the Livepeer network. It enables you to bring your own processing capabilities while integrating with Livepeer’s infrastructure for job distribution, payment, and orchestration.
Key Points
- BYOC is NOT just any Docker container - it must implement Livepeer’s processing API
- It runs on Orchestrator nodes, not on-chain or locally by default
- You need an Orchestrator to process BYOC jobs in the network
- It interacts with Gateways for job submission and with Orchestrators for execution
Architecture
Core Components
BYOC consists of two main server types:
- BYOCGatewayServer - Handles job submission from clients 1
- BYOCOrchestratorServer - Manages job processing on orchestrators 2
Job Flow
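In outline, the flow described in the sections below looks like this (simplified sketch):

```
Client → Gateway (POST /process/request/)
       → Orchestrator (verifies signature + payment)
       → Registered BYOC container (HTTP)
       → results flow back through the Orchestrator and Gateway to the client
```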
HTTP Endpoints
Gateway endpoints:
- /process/request/ - Submit new processing jobs 3

Orchestrator endpoints:
- /process/request/ - Process jobs from gateways 4
- /process/token - Get job tokens
- /capability/register - Register new capabilities
- /capability/unregister - Unregister capabilities 5
Interaction with Livepeer Network
With Gateways
Gateways act as the entry point for BYOC jobs:
- Receive job requests from clients
- Verify credentials and signatures
- Find suitable orchestrators with the required capability
- Forward signed requests to orchestrators
- Return results to clients 6
With Orchestrators
Orchestrators handle the actual processing:
- Receive signed job requests from gateways
- Validate signatures and payments
- Execute jobs in registered containers
- Manage capacity for different capabilities
- Return processed results 7
Do You Need an Orchestrator?
Yes, you need an orchestrator to:
- Process BYOC jobs in the network
- Handle payments and ticket validation
- Manage container lifecycle and capacity
- Register capabilities with the network
Container Requirements
What Can You Put in the Container?
Your container must:
- Implement Livepeer’s processing API endpoints
- Handle the specific job types you register for
- Be compatible with Docker runtime
- Expose appropriate HTTP endpoints
Container Lifecycle
The container is managed by the orchestrator:
- Pulled and started when jobs are submitted
- Stopped when idle (after 3 minutes by default) 8
- Health-checked every 5 seconds 9
- Restarted on failure (up to 3 times)
Where It Runs
BYOC containers run:
- On Orchestrator nodes (not on-chain)
- In a Docker environment managed by Livepeer
- With GPU allocation if required 10
- With local volume mounts for models 11
Examples
Job Submission Example
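A hedged sketch of a submission, assuming a local gateway on port 8935 and a hypothetical capability named my-capability; the Livepeer header carrying a base64-encoded job description is per the HTTP contract described later on this page, while the capability suffix on the path is an assumption to verify against your go-livepeer version:

```bash
# Job description fields mirror the "Job Request Structure" list below.
JOB='{"capability":"my-capability","parameters":"{}","timeout_seconds":30}'

# Send the base64-encoded job description in the Livepeer header.
curl -X POST "http://localhost:8935/process/request/my-capability" \
  -H "Livepeer: $(echo -n "$JOB" | base64 -w0)" \
  -H "Content-Type: application/json" \
  -d '{"input":"example payload"}'
```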
Job Request Structure
The job request includes:
- capability - The processing capability required
- parameters - Job-specific parameters (JSON)
- timeout_seconds - Maximum execution time
- Payment information and signature 12
Configuration Example
Orchestrators can be configured with BYOC support:
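A minimal sketch, assuming the livepeer binary; -serviceAddr is a standard go-livepeer flag, while treating -aiModels as the path to aiModels.json is an assumption to check against your release:

```bash
# Orchestrator with a local AI worker; external/BYOC containers are
# declared in aiModels.json (see "Wire it into aiModels.json" below).
livepeer -orchestrator -aiWorker \
  -serviceAddr 0.0.0.0:8935 \
  -aiModels /path/to/aiModels.json
```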
Network Integration
Capability Registration
Orchestrators register their processing capabilities:
- External capabilities are registered via API
- Capacity is tracked per capability
- Jobs are routed to capable orchestrators 14
Payment Integration
BYOC integrates with Livepeer’s payment system:
- Uses ticket-based micropayments
- Validates sender signatures
- Debits fees for processing 15
Security
- All requests are signed by the gateway
- Orchestrators verify signatures before processing
- TLS is used for network communication 16
Notes
- BYOC was introduced in v0.8.5 as the “Generic Processing Pipeline” 17
- It’s designed for custom media processing beyond standard transcoding
- The system reuses much of Livepeer’s existing infrastructure for payments and orchestration
- Containers are managed similarly to AI worker containers but for general processing tasks
BYOC Concepts
There are two closely related concepts people call BYOC:
AI BYOC – External Containers behind AI Workers
- AI Orchestrators define models in an aiModels.json file. For external containers, a model entry includes fields like url, capacity, and optional token. The AI Worker forwards inference requests to that URL instead of a built-in ai-runner container.
- The external container must behave like a normal model container (REST API, /health endpoint, expected HTTP semantics and error codes). Reference: livepeer/ai-runner
Protocol BYOC – Generic Processing Pipeline
- Introduced in go-livepeer as the Generic Processing Pipeline (a.k.a. Bring Your Own Container) with follow-up BYOC fixes in later releases.
- Exposes a generic POST /process/request/ path on Gateways that forwards requests to Orchestrators, which then route to arbitrary HTTP services/containers advertising that capability. Reference: livepeer/go-livepeer releases (BYOC-related PRs) and go-livepeer source.
- Gateways lock funds and send probabilistic tickets,
- Orchestrators redeem tickets and earn fees,
- Delegators stake on Orchestrators that provide these capabilities.
Architecture
Actors:
- Client / Builder app – Calls the Gateway via the AI/Video API or POST /process/request/.
- Gateway (AI or generic) – Routes jobs, holds funds, discovers suitable Orchestrators, sends/receives tickets, enforces pricing and max EV.
- Orchestrator (AI) – Registered on-chain with an AI service URI; advertises pipelines/models (and potentially external containers) via aiModels.json.
- AI Worker / Runner – Runs model containers or forwards to external containers (BYOC).
- BYOC Container(s) – Your Docker images (models, agents, business logic) exposed over HTTP. For AI BYOC these are “external containers”; for generic BYOC they implement named capabilities like pulse.
Key repositories:
- livepeer/go-livepeer – core node implementation, including BYOC / Generic Processing Pipeline.
- Roaring30s/livepeer-byoc – minimal example showcasing a pulse capability container and capability registration.
- livepeer/ai-runner – containerized Python application used as the base for AI inference containers.
- livepeer/pytrickle – Python implementation of the HTTP Trickle protocol used to connect real-time AI/video pipelines to the Livepeer stack.
Request Flow
For AI external containers, the Gateway still calls the Orchestrator as usual; the Orchestrator’s AI Worker routes requests to your url instead of an internal model container. For generic BYOC, the Gateway calls POST /process/request/; the Orchestrator routes that capability to your container (example: pulse in livepeer-byoc).
Interactions
Gateways
- AI Gateways are funded on-chain (deposit + reserve) and connect to the AI service registry using flags such as -aiServiceRegistry, -network, -ethUrl, -ethAcctAddr, -maxTotalEV, etc.; a startup sketch follows this list.
- For BYOC they:
- Discover Orchestrators that advertise a given pipeline/model or capability.
- Pay via probabilistic tickets as with any AI job.
- Forward generic jobs through POST /process/request/ for BYOC pipelines.
- Livepeer AI Orchestrator model docs: Download AI Models.
- Go SDK / protocol references: Go SDK docs.
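A hedged startup sketch using the flags named above; the -gateway flag, the network name, and every value below are placeholder assumptions to check against your go-livepeer release:

```bash
# AI gateway funded on-chain; all values are placeholders.
livepeer -gateway \
  -network arbitrum-one-mainnet \
  -ethUrl https://arb1.arbitrum.io/rpc \
  -ethAcctAddr 0xYourGatewayAccount \
  -aiServiceRegistry \
  -maxTotalEV 20000000000000
```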
Orchestrators
- Must be a top Orchestrator on the main network and run a separate AI Orchestrator with its own AI service URI to earn AI job fees.
- Configure models and containers in aiModels.json. For external containers they set url, capacity, and token and ensure the HTTP API behaves like ai-runner containers (including a /health endpoint).
- For BYOC capabilities (generic pipeline):
- Run go-livepeer with BYOC/Generic Processing Pipeline enabled.
- Use a registration mechanism (e.g. the register-capability container in livepeer-byoc) to advertise capabilities like pulse.
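As an illustration, a registration call against the /capability/register endpoint listed earlier might look like the sketch below; the payload shape is a guess modeled loosely on the livepeer-byoc example, not a verified schema:

```bash
# Hypothetical payload: the capability name, the URL where the orchestrator
# can reach the container, and how many concurrent jobs it can take.
curl -X POST "http://localhost:8935/capability/register" \
  -H "Content-Type: application/json" \
  -d '{"name":"pulse","url":"http://pulse:5000","capacity":1}'
```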
Build a BYO Container
For AI BYOC / External Containers (documented today):
Write your service
- Container runs whatever stack you like (Python/FastAPI, Node, etc.).
- It exposes a model-like HTTP API: Livepeer AI Worker will call your url the same way it calls its internal containers and expect the same response format & HTTP error semantics.
Implement /health
- Worker will hit /health at startup; it must return a 200 OK if the container is ready.
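A minimal sketch of that endpoint in Flask (the stack used by the livepeer-byoc sample); the /process route and its payload shape are illustrative assumptions, not a verified contract:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/health")
def health():
    # The AI Worker polls this at startup; 200 OK signals readiness.
    return jsonify({"status": "healthy"}), 200

@app.route("/process", methods=["POST"])
def process():
    # Illustrative handler; the real route and payload depend on the
    # capability/model API your container advertises.
    payload = request.get_json(silent=True) or {}
    return jsonify({"status": "ok", "echo": payload}), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```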
Container management
- You can manage containers however you like (single GPU node, K8s, Docker Swarm, Nomad, custom scripts). The only requirement is that the url behaves as a pass-through to the actual model containers.
Wire it into aiModels.json
- Add a model entry with pipeline, model_id, price_per_unit, and url, capacity, token for your BYOC container.
- Restart Orchestrator + AI Worker so they read the updated config.
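A hedged aiModels.json entry for an external container, using the fields named above; the pipeline and model_id values are placeholders, token is optional, and the exact schema and price_per_unit units should be checked against the Livepeer AI docs:

```json
[
  {
    "pipeline": "text-to-image",
    "model_id": "my-org/my-model",
    "price_per_unit": 1000000000,
    "url": "http://my-container:8000",
    "capacity": 2,
    "token": "optional-shared-secret"
  }
]
```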
Example architecture
- The sample repo spins up: gateway, orchestrator, a simple pulse capability container (Flask app), and a register-capability container to register that capability with the Orchestrator.
HTTP contract
- Clients call the Gateway’s /process/request/ endpoint with a Livepeer header containing a base64-encoded JSON job description (fields like run, capability, timeout, params).
- The Orchestrator forwards the call to your container, which returns a JSON status or payload (for pulse, just a “healthy” status).
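A client-side sketch of this contract, assuming a local gateway and the pulse capability from the sample repo; the job-description field names mirror the description above but are unverified against a specific release:

```python
import base64
import json
import urllib.request

# Job description per the contract described above; field names are
# assumptions based on the livepeer-byoc example.
job = {"run": "pulse", "capability": "pulse", "timeout": 30, "params": {}}
header = base64.b64encode(json.dumps(job).encode()).decode()

req = urllib.request.Request(
    "http://localhost:8935/process/request/pulse",  # placeholder gateway address
    data=b"{}",
    headers={"Livepeer": header, "Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())  # e.g. a JSON "healthy" status
```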
Build & run
- You can follow the docker-compose.yml in that repo as a template: one service for your capability, one to register it, and a local Gateway + Orchestrator pair for testing.
Examples & Repos
Repos to study:
- livepeer/go-livepeer – core implementation; BYOC lives in the “Generic Processing Pipeline” and related BYOC issues/PRs.
- Roaring30s/livepeer-byoc – minimal end-to-end example of the BYOC pipeline (pulse capability, registration container, local gateway/orch).
- livepeer/ai-runner – base image used by AI Workers; useful to mirror API behaviour when building BYOC containers.
- ai-spe/pytrickle – Python HTTP Trickle library used to bridge containers and the Livepeer stack for more advanced BYOC use cases.
- Peter Schrödl – Live in Lisbon Summit 2025: explains pytrickle and “bring your own container” wiring between containers and the gateway/orchestrator stack (≈4:44–7:15).
- Define & Dane Lisbon session: describes how the BYOC shift let them deploy Unreal/game-like pipelines onto the network.
- Doug & Shannon Lisbon sessions: frame BYOC / “bring your own container or dockerized model” as core to Livepeer AI’s value proposition.