Find the information they are looking for - through intuitive navigation, powerful search, and AI assistance
Understand the Livepeer protocol & products - with clear explanations from zero to hero
Navigate the Livepeer ecosystem - discover tools, services, and community resources
Build on the Livepeer protocol - with comprehensive guides, API references, and code examples
Use Livepeer & Ecosystem products - with step-by-step tutorials and quickstarts
The documentation utilises a streamlined setup and the Mintlify format, providing a modern, responsive, and accessible experience.

This section articulates the philosophical framework that underpinned the v2 engagement and demonstrates, with full implementation evidence, how every element of the RFP’s three stated aims - stakeholder-focused, AI-first, and future-proofed - was delivered across every layer of the system.
The foundational premise of this engagement, articulated before a single page was written, is that documentation is not editorial output. It is infrastructure. This principle reframes how investment in documentation should be understood, measured, and maintained. Infrastructure has properties that editorial content does not: it must be maintained under load, it degrades without active governance, and it requires testing, automation, and versioning. The D.O.C.S. System™, developed as the strategic operating model for this engagement, formalises this:
Distribution Infrastructure,
Operational Governance,
Composable Execution, and
Signal & System Feedback.
The practical implication is that “documentation work” in this engagement meant building a documentation operating system - a complete apparatus for authoring, validating, publishing, automating, and governing content - not just writing pages. The 58-script test suite, 17 GitHub Actions workflows, lpd CLI, component library, and docs-guide governance system are all infrastructure. They are the scaffolding that makes every page reliable, maintainable, and improvable by anyone.
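To make the idea of validation infrastructure concrete, here is a minimal sketch of the kind of check such a test suite might run. The actual 58 scripts are not reproduced in this report, so the required field names and the policy below are hypothetical:

```python
import re

# Hypothetical policy: every docs page must declare these frontmatter keys.
REQUIRED_KEYS = {"title", "description"}

def validate_frontmatter(page_source: str) -> list[str]:
    """Return a list of problems found in a page's YAML frontmatter."""
    match = re.match(r"\A---\n(.*?)\n---\n", page_source, re.DOTALL)
    if not match:
        return ["missing frontmatter block"]
    keys = {
        line.split(":", 1)[0].strip()
        for line in match.group(1).splitlines()
        if ":" in line
    }
    return [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - keys)]

page = """---
title: Get Started
description: Run your first Livepeer Gateway.
---
# Get Started
"""
print(validate_frontmatter(page))  # → []
```

Run across every page in CI, a check like this turns editorial conventions into enforced invariants - the defining property of infrastructure.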
Episode Details
Inferact is a new AI infrastructure company founded by the creators and core maintainers of vLLM. Its mission is to build a universal, open-source inference layer that makes large AI models faster, cheaper, and more reliable to run across any hardware, model architecture, or deployment environment. Together, they broke down how modern AI models are actually run in production, why “inference” has quietly become one of the hardest problems in AI infrastructure, and how the open-source project vLLM emerged to solve it. The conversation also looked at why the vLLM team started Inferact and their vision for a universal inference layer that can run any model, on any chip, efficiently.
Episode Details
In this episode of AI + a16z, dbt Labs founder and CEO Tristan Handy sits down with a16z’s Jennifer Li and Matt Bornstein to explore the next chapter of data engineering — from the rise (and plateau) of the modern data stack to the growing role of AI in analytics and data engineering. Among other topics, they discuss how automation and tooling like SQL compilers are reshaping how engineers work with data; dbt’s new Fusion Engine and what it means for developer workflows; and what to make of recent data-industry acquisitions and ambitious product launches.
Episode Details
Talks about Documentation as Infrastructure and how good documentation ensures AI surfaces the product, regardless of product capability.
The D.O.C.S. principles focus on creating high-quality, effective, and user-focused technical content by ensuring it is clear, concise, comprehensive, and consistent. Key principles include writing from the user’s perspective, using plain language, keeping documentation up-to-date, making it skimmable with structured formatting, and providing concrete, actionable examples.
Clear & Concise: Use simple language to explain complex ideas, avoiding jargon. Get to the point quickly and remove unnecessary information.
Comprehensive & Consistent: Cover all necessary information (endpoints, variations, edge cases) and maintain consistent formatting and terminology throughout.
Structured & Skimmable: Use headings, subheadings, lists, and tables to make content easy to navigate. Place the most important information first.
User-Focused: Write from the reader’s perspective, focusing on their tasks and goals instead of just technical features.
Accurate & Updated: Regularly review and update documentation to reflect the current state of the product.
Concrete & Interactive: Include real-world examples, code snippets, and tutorials to help users immediately apply the information.
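Some of these principles can be partially enforced by machine. As a small, hypothetical lint (not part of any tool named in this report) in support of “Structured & Skimmable”, a script could flag headings that skip a level:

```python
import re

def heading_level_issues(markdown: str) -> list[str]:
    """Flag headings that skip a level (e.g. an H1 followed directly by an
    H3), which hurts both skimmability and screen-reader navigation."""
    levels = [len(m.group(1))
              for m in re.finditer(r"^(#{1,6}) ", markdown, re.MULTILINE)]
    return [f"heading jumps from H{prev} to H{cur}"
            for prev, cur in zip(levels, levels[1:])
            if cur > prev + 1]

print(heading_level_issues("# Guide\n### Details\n"))
# → ['heading jumps from H1 to H3']
```

Checks like this make the principles above testable rather than aspirational.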
The Diátaxis framework is a systematic approach that organizes documentation into four distinct quadrants based on two axes: Action vs. Reflection and Learning vs. Working.
Tutorials (Learning-Oriented): Hands-on lessons that guide a beginner through a series of steps to achieve a result. Their primary goal is to provide a successful learning experience, not to solve a real-world problem.
How-To Guides (Task-Oriented): Practical directions that help an experienced user complete a specific, real-world task. They focus on the “how” and assume the user already has basic competence.
Reference (Information-Oriented): Technical descriptions of the machinery - APIs, classes, commands, and schemas. They must be neutral, accurate, and easy to consult quickly.
Explanation (Understanding-Oriented): Discussions that clarify and illuminate a particular topic. They provide context, background, and rationale (“the why”) instead of instructions.
The core principle is to keep these four types separate. Mixing them — such as putting long technical explanations inside a step-by-step tutorial — confuses the reader and makes the documentation harder to maintain.
For over a decade, SEO - optimising for Google - was the primary lever for documentation discoverability. That paradigm is being rapidly displaced. LLMs now index, summarise, and recommend infrastructure products at scale. Developers increasingly begin their research with ChatGPT, Claude, Perplexity, or Gemini instead of a search engine. This means documentation is now being read, evaluated, and cited by machines before it reaches humans. The implication is structural, not cosmetic. Semantic headings, stable canonical URLs, machine-readable frontmatter, OpenAPI spec integration, AI-integrated pipelines, and llms.txt files are not “nice to have” - they are the new minimum viable distribution layer. Answer Engine Optimization (AEO) is the evolution of SEO for this era. This engagement built Livepeer’s AEO layer from first principles.
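To make the llms.txt piece of that layer concrete, here is a minimal renderer following the proposed format - an H1 title, a blockquote summary, then H2 sections of markdown links. The site name and URL below are illustrative, not Livepeer’s actual values:

```python
def render_llms_txt(site, summary, sections):
    """Render a minimal llms.txt: an H1 title, a blockquote summary,
    then H2 sections containing markdown links for LLM consumption."""
    lines = [f"# {site}", "", f"> {summary}", ""]
    for heading, links in sections.items():
        lines.append(f"## {heading}")
        lines += [f"- [{title}]({url})" for title, url in links]
        lines.append("")
    return "\n".join(lines)

# Illustrative values only; not Livepeer's real structure or URLs.
toc = render_llms_txt(
    "Livepeer Docs",
    "Documentation for the Livepeer protocol and ecosystem.",
    {"Guides": [("Gateway setup", "https://example.com/gateway.md")]},
)
print(toc)
```

In practice such a file is generated from the site’s navigation at build time, so it never drifts from the published pages.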
The next wave beyond AI-assisted search is AI agent consumption: automated systems that query documentation to generate code, answer support tickets, configure infrastructure, or build integrations without human intervention. These agents require documentation that is not merely readable but executable - with explicit preconditions, invariants, verification steps, and failure modes. This means serving content designed specifically for agent consumption and context windows. The “Get AI to Set Up the Gateway” page is a direct instantiation of this principle: a page written for AI agent consumption, not human browsing. The llms.txt file, the repository AGENTS.md, and the native adapter files under .github/, .claude/, .cursor/, and .windsurf/ extend this philosophy into the development workflow itself - making the entire repository legible to AI coding assistants. See the 21 AI-First Docs items implemented in the AI-First Report.
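One way to operationalise “executable documentation” is to lint agent-facing pages for the elements named above. A sketch, assuming a hypothetical convention (not one stated in this report) that each element appears as an H2 section:

```python
# Hypothetical convention: agent-facing pages carry these four H2 sections.
AGENT_SECTIONS = ("Preconditions", "Steps", "Verification", "Failure modes")

def missing_agent_sections(markdown: str) -> list[str]:
    """Return the agent-oriented sections a page does not yet declare."""
    return [s for s in AGENT_SECTIONS if f"## {s}" not in markdown]

print(missing_agent_sections("## Preconditions\n## Steps\n"))
# → ['Verification', 'Failure modes']
```

A check like this, run in CI, keeps agent-consumable pages structurally complete as they evolve.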