Finding terms quickly
  • Cmd+K (Mac) / Ctrl+K (Windows) — search all Livepeer docs
  • Cmd+F (Mac) / Ctrl+F (Windows) — search within this page
  • Use the category filter below to narrow by topic
Machine-readable term index: glossary-data.json
Terms developers encounter across Livepeer’s SDKs, AI Gateway API, streaming protocols, and protocol integration layer.

Livepeer Protocol Terms

AI Gateway API
Definition: REST API endpoint layer for routing AI inference requests through Livepeer’s gateway nodes to GPU orchestrators on the network.
Context: The AI Gateway API is the primary integration surface for developers submitting AI pipeline requests — text-to-image, live-video-to-video, LLM chat, etc. — to the decentralised Livepeer network without managing infrastructure directly.
Status: draft
Pages: developers/ai-gateway, developers/api
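The request flow this entry describes amounts to assembling a JSON job payload and POSTing it to a gateway pipeline route. A minimal Python sketch, with an assumed host placeholder and assumed field names (`model_id` and `prompt` mirror common pipeline parameters; verify both against the API reference):

```python
import json

# Hypothetical gateway host; substitute your own. The model ID below is
# the SDXL example used elsewhere in this glossary.
GATEWAY_URL = "https://<your-gateway-host>"

def build_text_to_image_request(prompt: str, model_id: str) -> dict:
    """Assemble the JSON payload for a text-to-image pipeline request."""
    return {
        "model_id": model_id,  # which model the orchestrator should serve
        "prompt": prompt,      # the generation prompt
    }

payload = build_text_to_image_request(
    "a watercolor fox", "stabilityai/stable-diffusion-xl-base-1.0"
)
body = json.dumps(payload)  # POST this body to the gateway's pipeline route
```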
BYOC
Definition: Deployment pattern where teams supply custom Docker containers for AI workloads, enabling arbitrary Python-based models to run on the Livepeer network.
Context: BYOC is the Livepeer mechanism that allows builders and orchestrators to deploy containerised AI workloads — including custom pipelines not natively supported — on network compute.
Status: current
Pages: developers/compute, developers/pipelines

Cascade
Definition: Strategic vision for Livepeer to become the leading platform for real-time AI video pipelines, and the associated protocol upgrade enabling the AI inference subnet.
Context: Cascade is Livepeer’s named strategic phase introducing AI inference as a first-class use case alongside transcoding, activating the AI subnet and GPU orchestrator market.
Status: current
Pages: developers/protocol

ComfyStream
Definition: Livepeer project running ComfyUI workflows as a real-time media processing backend for live streams.
Context: ComfyStream is a Livepeer-maintained integration layer that translates ComfyUI node graphs into streaming AI pipelines compatible with the network’s live-video-to-video pipeline, enabling GPU orchestrators to serve real-time diffusion transforms.
Status: current
Pages: developers/pipelines, developers/ai-video

Daydream
Definition: Livepeer’s hosted real-time AI video platform turning live camera input into AI-transformed visuals with sub-second latency.
Context: Daydream is the flagship consumer-facing product demonstrating Livepeer’s real-time AI video capabilities, used by developers as a reference implementation and deployment target for live-video-to-video pipelines.
Status: current
Pages: developers/ai-video, developers/use-cases

Delegator
Definition: Token holder who stakes LPT to an orchestrator to secure the network, participate in governance, and earn rewards.
Context: In the developer context, understanding delegators is relevant when reasoning about LPT staking mechanics and the economic security of the network backing developer workloads.
Status: current
Pages: developers/protocol

Developer Stack
Definition: Set of SDKs, APIs, UI components, and hosted services for integrating video and AI capabilities into applications.
Context: The Developer Stack is Livepeer’s collective offering for builders — encompassing Livepeer Studio, the AI Gateway API, Livepeer.js, and PyTrickle — enabling video and AI features without managing protocol infrastructure.
Status: current
Pages: developers/index, developers/quickstart
FrameProcessor
Definition: Pattern in PyTrickle for building real-time video processing applications with custom per-frame processing logic.
Context: FrameProcessor is the Livepeer-defined interface in the PyTrickle SDK that developers implement to apply AI transforms to individual video frames within a live stream pipeline.
Status: current
Pages: developers/pipelines, developers/ai-video
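The per-frame pattern described here can be sketched in plain Python. This is an illustrative shape only, assuming hypothetical class and method names rather than PyTrickle's actual interface, with a trivial pixel inversion standing in for real model inference:

```python
# Illustrative sketch of per-frame stream processing; names are
# assumptions, not PyTrickle's exact API.

class InvertProcessor:
    """Applies a stand-in 'transform' (8-bit pixel inversion) per frame."""

    def process_frame(self, frame: list) -> list:
        # A real processor would run model inference here.
        return [255 - px for px in frame]

def run_pipeline(processor, frames):
    """Feed each incoming frame through the processor, as a stream would."""
    return [processor.process_frame(f) for f in frames]

out = run_pipeline(InvertProcessor(), [[0, 128, 255], [10, 20, 30]])
```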
Gateway
Definition: Node that submits jobs, routes work to orchestrators, manages payment flows, and provides a protocol interface for developers.
Context: In the developer context, a Gateway is the on-network relay between a developer’s application (sending AI or transcoding requests) and the orchestrators performing compute — abstracting payment channels and orchestrator selection.
Status: current
Pages: developers/ai-gateway, developers/architecture

go-livepeer
Definition: Official Go implementation of the Livepeer protocol containing Broadcaster, Orchestrator, Transcoder, Gateway, and Worker roles in a single binary.
Context: go-livepeer is the canonical node software that developers running protocol-level infrastructure compile and configure; it is the reference implementation against which all SDK behaviour is defined.
Status: current
Pages: developers/protocol, developers/node

GPU Operator
Definition: An orchestrator operator contributing GPU hardware and AI model capacity to the Livepeer network for inference workloads.
Context: GPU Operators are the supply side of the Livepeer AI market; developers routing requests through the AI Gateway API ultimately land on GPU Operator infrastructure.
Status: current
Pages: developers/compute, developers/node

LIP
Definition: Livepeer Improvement Proposal — formal design document describing a proposed change to the protocol, governance process, or ecosystem standard.
Context: LIPs are the official mechanism through which protocol changes to Livepeer are proposed, debated, and ratified by stakeholders; developers working at the protocol level reference LIPs for specification authority.
Status: current
Pages: developers/protocol, developers/governance

Livepeer Cloud
Definition: Platform by the Livepeer Cloud SPE increasing network accessibility with a free community AI gateway and managed developer services.
Context: Livepeer Cloud is a hosted offering making it easier for developers to access Livepeer’s AI and video infrastructure without self-hosting gateway nodes.
Status: current
Pages: developers/index, developers/api

Livepeer Studio
Definition: Developer platform for adding live and on-demand video experiences to applications, providing stream management, asset storage, analytics, and billing.
Context: Livepeer Studio is the primary hosted developer product; it exposes the REST API and dashboard that most developers use to create streams, upload assets, configure access control, and monitor usage.
Status: current
Pages: developers/index, developers/api

Livepeer.js
Definition: JavaScript library for the Livepeer API providing programmatic access to Studio features including stream and asset management.
Context: Livepeer.js is the official JavaScript/TypeScript SDK for interacting with Livepeer Studio’s REST API, designed for Node.js and browser environments.
Status: current
Pages: developers/sdks, developers/api
Definition: An open-source Python reference implementation of a Livepeer gateway, enabling job submission, payment flow management, and pipeline routing from Python applications.
Context: Maintained by the community as a lightweight alternative for developers building Python-native integrations with the Livepeer network.
Status: current
Pages: developers/sdks, developers/ai-gateway
LPT
Definition: ERC-20 governance and staking token used for orchestrator selection, delegation, reward distribution, and network security.
Context: LPT is the native utility token of the Livepeer protocol; developers encounter it when understanding the economic layer backing the compute they consume — staked LPT determines which orchestrators process their requests.
Status: current
Pages: developers/protocol, developers/staking

NaaP
Definition: Network-as-a-Product — the framing of delivering the Livepeer Network as a reliable, SLA-backed product layer with improved orchestrator selection and accessibility.
Context: NaaP is Livepeer’s strategic positioning that developers build on, treating the decentralised network as a dependable infrastructure product rather than a raw peer-to-peer substrate.
Status: current
Pages: developers/protocol, developers/architecture

Orchestrator
Definition: Supply-side operator contributing GPU resources, receiving jobs, performing transcoding or AI inference, and earning rewards from the Livepeer protocol.
Context: Orchestrators are the network nodes that actually execute the compute underlying developer API calls; understanding orchestrators helps developers reason about latency, availability, and pricing in the network.
Status: current
Pages: developers/architecture, developers/protocol

Protocol Layer
Definition: On-chain layer governing staking, delegation, rewards, and verification via smart contracts deployed on Arbitrum.
Context: The Protocol Layer is the blockchain foundation underpinning Livepeer — developers building at the application level interact with it indirectly through the SDK, while protocol developers interact with it directly via smart contracts.
Status: current
Pages: developers/protocol, developers/architecture

PyTrickle
Definition: Python package for real-time video and audio streaming with custom processing, built on the Livepeer Trickle protocol.
Context: PyTrickle is the official Python SDK for developers building real-time AI video applications on Livepeer, providing the FrameProcessor interface for per-frame model inference.
Status: current
Pages: developers/sdks, developers/streaming

Reward call
Definition: On-chain transaction an active orchestrator submits each protocol round to mint and distribute new LPT inflation rewards to itself and its delegators.
Context: Developers building tooling or monitoring for the protocol layer encounter reward calls as the primary mechanism by which inflationary LPT is distributed each round.
Status: current
Pages: developers/protocol, developers/staking

SPE
Definition: Special Purpose Entity — a treasury-funded organisational unit with a defined scope, fixed budget, and accountability structure for executing ecosystem workstreams.
Context: SPEs are Livepeer’s primary funding and execution vehicle for ecosystem development; developers building tooling or integrations may encounter SPEs when understanding how Livepeer Foundation allocates resources to projects.
Status: current
Pages: developers/governance, developers/protocol

StreamPlace
Definition: Project building the video layer for decentralised social platforms, focused on the AT Protocol ecosystem.
Context: StreamPlace is a Livepeer ecosystem project demonstrating how the network’s streaming infrastructure can underpin decentralised social video applications.
Status: current
Pages: developers/ai-video, developers/use-cases

Treasury
Definition: On-chain pool of LPT and ETH governed by token holders via the LivepeerGovernor contract, used to fund public goods and ecosystem development.
Context: The Treasury is the protocol-level funding mechanism for Livepeer ecosystem work; developers building tooling or integrations may apply for treasury grants via the governance process.
Status: current
Pages: developers/protocol, developers/governance

Trickle Streaming Protocol
Definition: Low-latency HTTP-based streaming protocol for real-time media transport between Livepeer nodes, enabling frame-level AI processing on live streams.
Context: The Trickle Streaming Protocol is the Livepeer-native transport layer underpinning PyTrickle and the live-video-to-video pipeline, enabling sub-segment-level media delivery for real-time AI transforms.
Status: current
Pages: developers/streaming, developers/architecture

Zero-to-Hero
Definition: Guided learning path taking a developer from no prior knowledge of Livepeer to competent ecosystem participation through structured tutorials and exercises.
Context: Zero-to-Hero is Livepeer’s flagship onboarding tutorial series for new developers, providing step-by-step guides covering SDK setup, stream creation, AI pipeline integration, and protocol fundamentals.
Status: current
Pages: developers/guides, developers/quickstart

AI Terms

Batch inference
Definition: Running a trained model on a group of inputs asynchronously, optimising GPU utilisation through parallelisation.
External: Google Cloud — What is batch inference?
Status: current
Pages: developers/ai-gateway, developers/pipelines

Cold start
Definition: Latency incurred when a model must be loaded from storage into GPU memory before the first inference request, often ranging from 5 to 90 seconds.
Also known as: Cold model
External: OpenMetal — Cold start latency in private AI inference
Status: current
Pages: developers/pipelines, developers/ai-gateway

ComfyUI
Definition: Open-source node-based graphical interface for building and executing AI image and video generation workflows.
External: ComfyUI on GitHub
Status: current
Pages: developers/pipelines, developers/ai-video

Diffusion model
Definition: Generative models learning to produce data by reversing a gradual noising process applied during training.
External: Wikipedia — Diffusion model
Status: current
Pages: developers/pipelines, developers/ai-video
Embedding
Definition: Learned numerical vector representation in continuous space where similar items map to nearby points, enabling semantic search and cross-modal reasoning.
External: Wikipedia — Word embedding
Status: current
Pages: developers/pipelines, developers/ai-gateway
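The "nearby points" property can be shown with cosine similarity over toy vectors (the 3-d example embeddings are invented for illustration; real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: semantically close items point in similar directions.
cat = [1.0, 0.9, 0.1]
kitten = [0.9, 1.0, 0.2]
car = [0.1, 0.2, 1.0]
```

Semantic search over an embedding index is just "rank candidates by this score against the query vector".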
HuggingFace
Definition: An AI platform and open-source community providing model repositories, datasets, and inference APIs; a primary source for AI models deployed on Livepeer orchestrator nodes.
Also known as: Hugging Face, HF
External: HuggingFace
Status: current
Pages: developers/pipelines, developers/ai-gateway

Image-to-image
Definition: AI pipeline transforming an input image into a modified output image guided by a text prompt or conditioning signal.
External: Wikipedia — Image translation
Status: current
Pages: developers/pipelines, developers/ai-gateway

Image-to-text
Definition: AI pipeline generating a textual description from an input image, encompassing captioning and optical character recognition.
External: Hugging Face — Image-to-Text task
Status: current
Pages: developers/pipelines, developers/ai-gateway

Image-to-video
Definition: AI pipeline generating a short video clip conditioned on a single input image, animating a still frame.
External: Hugging Face — Image-to-Video task
Status: current
Pages: developers/pipelines, developers/ai-gateway

Inference
Definition: Running a trained model on new input data to produce predictions or generated output, as opposed to training the model.
External: Wikipedia — Inference engine
Status: current
Pages: developers/ai-gateway, developers/pipelines
Live-video-to-video
Definition: AI pipeline applying generative models to a continuous video stream frame-by-frame at interactive frame rates.
External: StreamDiffusion on GitHub
Status: current
Pages: developers/pipelines, developers/ai-video
LLM
Definition: Large language model — neural network trained on massive text corpora to understand and generate natural language.
External: Wikipedia — Large language model
Status: current
Pages: developers/pipelines, developers/ai-gateway
LoRA
Definition: Low-Rank Adaptation — parameter-efficient fine-tuning technique injecting trainable low-rank matrices into transformer layers to specialise a model without full retraining.
External: Hugging Face — LoRA training
Status: current
Pages: developers/pipelines, developers/ai-gateway
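The low-rank idea can be shown with a toy example: the frozen weight matrix W is left untouched and gains an additive correction B @ A, where A and B are small trainable matrices. A dependency-free sketch using plain lists:

```python
# Toy rank-1 LoRA-style update: W stays frozen; only A and B would train.

def matmul(X, Y):
    """Naive matrix product of two lists-of-rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_adapt(W, A, B):
    """Return W + B @ A without modifying the frozen W."""
    delta = matmul(B, A)
    return [[w + d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen 2x2 weight
B = [[1.0], [0.0]]            # 2x1 trainable matrix (rank 1)
A = [[0.5, 0.5]]              # 1x2 trainable matrix
W_adapted = lora_adapt(W, A, B)
```

Because only A and B are trained, the number of new parameters scales with the rank, not with the full weight matrix.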
Model
Definition: Mathematical structure (neural network with learned weights) enabling predictions or generation for new inputs, identified by a model ID and pipeline type.
External: Wikipedia — Machine learning
Status: current
Pages: developers/pipelines

Model card
Definition: Standardised documentation describing a model’s intended use, training data, evaluation metrics, and known limitations.
External: Hugging Face — Model Cards
Status: current
Pages: developers/pipelines, developers/ai-gateway
Model ID
Definition: Unique string identifier specifying which AI model to invoke on a repository hub, for example stabilityai/stable-diffusion-xl-base-1.0.
External: Hugging Face — Model Cards
Status: current
Pages: developers/pipelines, developers/ai-gateway
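Such an ID follows the "namespace/model-name" shape, so splitting on the first "/" recovers the publishing organisation and the model name. A small sketch using the ID quoted in this entry:

```python
def parse_model_id(model_id: str):
    """Split a hub model ID into (namespace, model name)."""
    namespace, _, name = model_id.partition("/")
    return namespace, name

org, name = parse_model_id("stabilityai/stable-diffusion-xl-base-1.0")
```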
Multimodal
Definition: AI systems capable of processing and integrating multiple data types — such as text, images, audio, and video — for cross-modal understanding and generation.
External: Wikipedia — Multimodal learning
Status: current
Pages: developers/pipelines, developers/ai-gateway

Ollama
Definition: Open-source tool for running large language models locally with a CLI and OpenAI-compatible REST API.
External: Ollama
Status: current
Pages: developers/pipelines, developers/ai-gateway

PyTorch
Definition: Open-source deep learning framework providing GPU-accelerated tensor computation and automatic differentiation, developed by Meta.
Also known as: Torch
External: Wikipedia — PyTorch
Status: current
Pages: developers/pipelines, developers/ai-gateway

Real-time inference
Definition: Running AI models on live streaming input with latency low enough for interactive speeds, typically under 100 milliseconds.
External: Ultralytics — Real-time inference
Status: current
Pages: developers/ai-video, developers/pipelines

SAM 2
Definition: Meta’s unified foundation model for promptable segmentation in images and videos with streaming memory, enabling interactive region selection.
External: Hugging Face — SAM 2
Status: current
Pages: developers/pipelines, developers/ai-gateway
Definition: Algorithm controlling the denoising process in diffusion models by defining the noise schedule and update rule for each generation step.External: Hugging Face — Scheduler featuresStatus: currentPages: developers/pipelines
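One of the simplest noise schedules a scheduler can define is a linear interpolation between a starting and ending beta value. A minimal sketch, making no assumptions about any particular diffusion library (the default betaStart/betaEnd values here are common DDPM conventions, not Livepeer settings):

```typescript
// Sketch of a linear beta (noise) schedule as used by simple DDPM-style
// schedulers: beta interpolates from betaStart to betaEnd over `steps` steps.
function linearBetaSchedule(
  steps: number,
  betaStart = 1e-4,
  betaEnd = 0.02,
): number[] {
  return Array.from(
    { length: steps },
    (_, i) => betaStart + (i / (steps - 1)) * (betaEnd - betaStart),
  );
}
```

Real schedulers pair a schedule like this with an update rule that decides how much predicted noise to remove at each step.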
SDXL
Definition: Stable Diffusion XL — advanced text-to-image model with a 3× larger UNet and dual text encoders, generating images at 1024×1024 resolution.
External: Hugging Face — SDXL
Status: current
Pages: developers/pipelines, developers/ai-gateway

Image Segmentation
Definition: AI task partitioning a digital image into regions by assigning a semantic label to every pixel, identifying and outlining objects.
Also known as: Segmentation (AI)
External: Wikipedia — Image segmentation
Status: current
Pages: developers/pipelines, developers/ai-gateway

StreamDiffusion
Definition: Optimised real-time diffusion pipeline using stream batching and stochastic similarity filtering to reduce latency for live video transforms.
External: StreamDiffusion on GitHub
Status: current
Pages: developers/pipelines, developers/ai-video

Stable Video Diffusion
Definition: Stability AI’s latent diffusion model generating 14–25 frame video clips at 576×1024 resolution conditioned on a single input image.
External: Hugging Face — Stable Video Diffusion
Status: current
Pages: developers/pipelines, developers/ai-gateway

TensorRT
Definition: NVIDIA’s inference SDK optimising models through quantisation, layer fusion, and kernel auto-tuning for low-latency GPU inference.
External: NVIDIA TensorRT
Status: current
Pages: developers/pipelines, developers/ai-gateway

Text-to-Image
Definition: AI pipeline generating an image from a natural language text prompt using a language encoder and a diffusion model.
External: Wikipedia — Text-to-image model
Status: current
Pages: developers/pipelines, developers/ai-gateway

Text-to-Speech
Definition: AI pipeline synthesising spoken audio from written text using phonetic conversion and audio synthesis models.
External: Wikipedia — Speech synthesis
Status: current
Pages: developers/pipelines, developers/ai-gateway

Torch
Definition: Open-source deep learning framework providing GPU-accelerated tensor computation and automatic differentiation, developed by Meta.
Also known as: PyTorch
External: Wikipedia — PyTorch
Status: current
Pages: developers/pipelines, developers/ai-gateway

Warm Model
Definition: AI model already loaded into GPU memory and ready to serve inference requests immediately, without cold-start delay.
External: OpenMetal — Cold start latency in private AI inference
Status: current
Pages: developers/pipelines, developers/ai-gateway

Whisper
Definition: OpenAI’s encoder-decoder transformer for speech recognition and translation, pre-trained on 680,000 hours of multilingual audio.
External: Hugging Face — Whisper
Status: current
Pages: developers/pipelines, developers/ai-gateway

World Model
Definition: Neural network representing and predicting environment dynamics, enabling an AI agent to plan by simulating future outcomes.
External: Wikipedia — Generative artificial intelligence
Status: current
Pages: developers/ai-video, developers/use-cases

Video Terms

HLS
Definition: HTTP Live Streaming protocol by Apple that encodes video into multiple quality levels with an index playlist for adaptive bitrate delivery.
External: Wikipedia — HTTP Live Streaming
Status: current
Pages: developers/streaming, developers/playback

Low Latency
Definition: A system characteristic where the delay between an event occurring and a response being delivered is minimised; in Livepeer, sub-500ms round-trip times are targeted for real-time AI video pipelines.
Context: Critical for interactive AI video applications — high latency breaks the real-time feedback loop between user input and AI-transformed output.
Status: current
Pages: developers/ai-video, developers/streaming

RTMP
Definition: Real-Time Messaging Protocol for streaming audio, video, and data over TCP on port 1935, the standard ingest protocol for live broadcasting.
External: Wikipedia — RTMP
Status: current
Pages: developers/streaming, developers/ingest

Low-Latency Streaming
Definition: Video delivery with end-to-end delay under one second, typically achieved via WebRTC’s UDP-based transport.
External: Cloudflare — WebRTC WHIP/WHEP
Status: current
Pages: developers/streaming, developers/webrtc

Transcoding
Definition: Direct digital-to-digital conversion of video from one encoding format, resolution, or bitrate to another, producing multiple adaptive renditions.
External: Wikipedia — Transcoding
Status: current
Pages: developers/streaming, developers/protocol
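A set of adaptive renditions can be modelled as a simple bitrate ladder. The names, resolutions, and bitrates below are illustrative assumptions for a sketch, not Livepeer defaults:

```typescript
// Illustrative bitrate ladder: each source stream is transcoded into the
// renditions at or below its own resolution for adaptive playback.
interface Rendition {
  name: string;
  width: number;
  height: number;
  bitrateKbps: number;
}

const LADDER: Rendition[] = [
  { name: "1080p", width: 1920, height: 1080, bitrateKbps: 4500 },
  { name: "720p", width: 1280, height: 720, bitrateKbps: 2500 },
  { name: "480p", width: 854, height: 480, bitrateKbps: 1200 },
  { name: "360p", width: 640, height: 360, bitrateKbps: 600 },
];

// Keep only renditions that do not exceed the source resolution.
function renditionsFor(sourceHeight: number): Rendition[] {
  return LADDER.filter((r) => r.height <= sourceHeight);
}
```

A player then switches between these renditions at runtime based on the viewer’s measured bandwidth.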
Video on Demand
Definition: A media delivery model where recorded video content is stored server-side and streamed to viewers on request at any time, in contrast to live streaming.
Also known as: VOD, on-demand video
External: Video on demand — Wikipedia
Status: current
Pages: developers/streaming, developers/api

WebRTC
Definition: Open standard providing browsers and mobile apps with real-time peer-to-peer audio, video, and data exchange without plugins.
External: Wikipedia — WebRTC
Status: current
Pages: developers/streaming, developers/webrtc

Web3 Terms

Arbitrum
Definition: A Layer 2 Optimistic Rollup settling to Ethereum, processing transactions off-chain while inheriting Ethereum-grade security.
External: Arbitrum documentation
Status: current
Pages: developers/protocol

Delegation
Definition: LPT holders staking tokens toward orchestrators they trust, sharing in rewards without running infrastructure.
Context: Delegation is the mechanism through which passive LPT holders contribute to network security and earn yield without operating nodes, relevant to developer documentation covering the protocol economics layer.
Status: current
Pages: developers/protocol

Ethereum
Definition: A decentralised, open-source blockchain with smart contract functionality; native cryptocurrency is Ether (ETH).
External: Wikipedia — Ethereum
Status: current
Pages: developers/protocol

Smart Contract
Definition: Self-executing program deployed on a blockchain that automatically enforces agreement terms without intermediaries.
External: Ethereum — Smart contracts
Status: current
Pages: developers/protocol, developers/architecture

Technical Terms

CPU
Definition: The primary general-purpose processor in a computer; in Livepeer, the CPU handles node software overhead while the GPU handles intensive transcoding and AI inference workloads.
External: Central processing unit — Wikipedia
Status: current
Pages: developers/compute, developers/node

Endpoint
Definition: Specific URL path where an API receives and processes requests.
External: Wikipedia — Web API
Status: current
Pages: developers/api

Gigabyte
Definition: A unit of digital storage equal to 1,073,741,824 bytes (binary); used in Livepeer hardware specifications for RAM, VRAM, and storage requirements.
External: Gigabyte — Wikipedia
Status: current
Pages: developers/compute, developers/node

JavaScript
Definition: A high-level interpreted scripting language used for web and server-side development; Livepeer’s primary SDKs and gateway clients expose JavaScript/TypeScript APIs.
Also known as: JS
External: JavaScript — MDN
Status: current
Pages: developers/sdks, developers/api

JWT
Definition: JSON Web Token — a compact, URL-safe signed token used to authenticate and authorise requests between parties.
External: Wikipedia — JSON Web Token
Status: draft
Pages: developers/access-control, developers/api
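Because a JWT is three base64url segments (header.payload.signature), its claims can be read without the signing key. A minimal decode sketch — note it deliberately does not verify the signature, which production access-control code must do:

```typescript
// Decode (NOT verify) the payload segment of a JWT. The claims are plain
// base64url-encoded JSON; signature verification is intentionally omitted.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const parts = token.split(".");
  if (parts.length !== 3) throw new Error("Not a JWT");
  const json = Buffer.from(parts[1], "base64url").toString("utf8");
  return JSON.parse(json) as Record<string, unknown>;
}
```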
RTX
Definition: NVIDIA’s current consumer GPU product line featuring dedicated Tensor cores that accelerate AI/ML inference workloads; RTX GPUs are well-suited for Livepeer AI pipeline tasks.
Also known as: GeForce RTX
External: NVIDIA GeForce RTX
Status: current
Pages: developers/compute, developers/node

Self-Hosting
Definition: A deployment model in which the operator runs their own infrastructure rather than relying on a managed cloud service; Livepeer gateways and AI nodes can be self-hosted on any compatible hardware.
External: Self-hosting — Wikipedia
Status: current
Pages: developers/node, developers/compute

Solidity
Definition: Statically-typed, contract-oriented programming language for writing smart contracts on Ethereum and EVM-compatible chains.
External: Wikipedia — Solidity
Status: current
Pages: developers/protocol, developers/contracts

VRAM
Definition: Video RAM — dedicated GPU memory used for storing graphics data, model weights, and intermediate tensors during AI inference.
External: Wikipedia — Video random-access memory
Status: current
Pages: developers/pipelines, developers/compute

Webhook
Definition: HTTP callback mechanism that sends a POST request to a configured URL when a specific platform event occurs, enabling event-driven integrations.
External: Wikipedia — Webhook
Status: current
Pages: developers/api, developers/events
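Webhook receivers commonly authenticate deliveries by recomputing an HMAC over the raw request body and comparing it to the signature the sender attached. A sketch using Node’s built-in crypto module — the hex-encoded SHA-256 scheme here is an assumption, so check the actual signing scheme and header name in your provider’s docs:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute the HMAC-SHA256 of the raw body with the shared secret and
// compare it to the received signature in constant time.
function verifyWebhook(
  rawBody: string,
  signatureHex: string,
  secret: string,
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on unequal lengths, so guard first.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```

Verifying over the raw body (before any JSON parsing) matters: re-serialised JSON can differ byte-for-byte from what was signed.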

Operational Terms

KYC
Definition: Know Your Customer — identity verification process for regulatory compliance, requiring users to provide identifying information before accessing certain features.
External: Wikipedia — Know your customer
Status: current
Pages: developers/access-control

SLA
Definition: A formal commitment between a service provider and a customer defining expected performance levels, uptime guarantees, and remediation obligations.
External: Service-level agreement — Wikipedia
Status: current
Pages: developers/architecture, developers/api

Developer Docs

Start building with Livepeer’s APIs, SDKs, and AI pipelines

Full Glossary

All terms across every Livepeer tab

API Reference

Complete reference for Livepeer’s REST API endpoints
Last modified on April 7, 2026