CHABOT.DEV — A FIELD JOURNAL — VOLUME I, NO. 4

SECTION 16   ✣

DevRel in the AI Era.


13 entries in this section

OVERVIEW

A dedicated section. The 2023–2026 wave of LLMs, AI coding agents, and agent-mediated discovery has changed what Developer Relations does, who its audience is, and how it is measured — more fundamentally than any shift since the 2008 emergence of GitHub and Stack Overflow.

This section synthesises the perspectives, evidence, and early signals.

The central claim

Modern DevRel now serves two distinct audiences simultaneously:

  1. AI agents — Cursor, Claude Code, Copilot, ChatGPT, Perplexity, Gemini, and the growing class of background coding agents — that read your docs, sample code, and llms.txt to produce code on behalf of developers. They are the execution audience. They want exhaustive, structured, machine-parseable material.

  2. Human developers — who increasingly never read your docs first. They form opinions about your product on X, Bluesky, YouTube, Twitch, Reddit, Hacker News, Discord, podcasts, and conferences. They are the inspiration audience. They need authentic engineering brand, vivid demos, opinionated takes, and people they trust.

The same content rarely serves both well. The most successful 2025–2026 DevRel teams have begun to design for the split: agent-readable surfaces optimised for LLM consumption, and human-shaped surfaces optimised for emotional and aesthetic engagement. See ./the-dual-audience-thesis.md.
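For concreteness, the agent-readable surface often begins with an llms.txt file. A minimal sketch following the format Jeremy Howard proposed (an H1 title, a blockquote summary, then H2 sections of annotated links; the product and URLs here are invented for illustration):

```markdown
# ExampleDB

> ExampleDB is a serverless document database. This file lists the
> pages most useful to an LLM answering questions about ExampleDB.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install, auth, first query
- [API reference](https://example.com/docs/api.md): full REST endpoint list

## Optional

- [Changelog](https://example.com/changelog.md)
```

The human-shaped surfaces, by contrast, are not files at all: they are streams, talks, and people.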

What changed between 2022 and 2026

| Dimension | 2022 | 2026 |
| --- | --- | --- |
| First-touch discovery | Google search → docs | "Ask ChatGPT" / "Ask Claude" / Perplexity → maybe-docs |
| Code authorship | Developer types code | AI agent writes code under developer supervision |
| Doc reader | Human | Human + AI agent (often AI first) |
| Documentation surface | Web page | Web page + llms.txt + MCP server |
| Integration interface | REST/GraphQL API + SDK | Same + MCP server + agent-friendly tool schemas |
| Onboarding metric | TTFHW (time to first Hello World) for human | TTFHW for human + agent task success rate |
| Discovery channel | Google + Twitter + HN | Google + ChatGPT + YouTube + LinkedIn + Bluesky + Reddit + podcasts |
| Trust signal | Stars, follower counts, "recommended by" | Cited by AI assistants, vouched for by trusted humans |
| Content type that travels | Polished blog posts | Live streams, founder voices, opinionated takes, failures |
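The onboarding-metric shift can be made concrete. A minimal sketch of computing TTFHW alongside an agent task success rate from an event log; the event names and data here are hypothetical, not any real analytics schema:

```python
from dataclasses import dataclass

@dataclass
class OnboardingEvent:
    actor: str       # "human" or "agent" (hypothetical labels)
    event: str       # e.g. "signup", "first_hello_world", "task_attempt"
    minutes: float   # minutes since first touch
    success: bool = True

def ttfhw(events):
    """Time to first Hello World for human developers, in minutes."""
    times = [e.minutes for e in events
             if e.actor == "human" and e.event == "first_hello_world"]
    return min(times) if times else None

def agent_task_success_rate(events):
    """Fraction of agent task attempts that succeeded."""
    attempts = [e for e in events
                if e.actor == "agent" and e.event == "task_attempt"]
    if not attempts:
        return None
    return sum(e.success for e in attempts) / len(attempts)

log = [
    OnboardingEvent("human", "signup", 0.0),
    OnboardingEvent("human", "first_hello_world", 12.5),
    OnboardingEvent("agent", "task_attempt", 1.0, success=True),
    OnboardingEvent("agent", "task_attempt", 2.0, success=False),
    OnboardingEvent("agent", "task_attempt", 3.0, success=True),
]

print(ttfhw(log))                    # 12.5
print(agent_task_success_rate(log))  # two of three attempts succeeded
```

The point of the sketch is the split itself: one metric per audience, computed from the same instrumentation.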

Each of these shifts is treated in detail in subsequent files.

What this section covers

The files below fall into five groups: foundations, agent-facing surfaces, optimisation and measurement, identity and culture, and the debates.

The trains of thought, summarised

The discourse on AI-era DevRel splits into roughly five trains of thought. Each is developed in detail elsewhere in this section; the short versions:

  1. The Dual Audience. The most operationally pragmatic camp. Practitioners design separately for human inspiration and AI execution. Both audiences are now real; the strategies differ. Best articulated by people running working DevRel programs at AI-first companies (Anthropic, Mintlify, Vercel) and by the AngelHack and DocE-AI essays of 2025–2026.

  2. The Strategic-Reframe Camp. Argues DevRel is more important than ever, not less. “Plot twist: we’re not dead. We’re standing on the biggest stage of our careers” — Angie Jones, How DevRel is Leading AI Adoption (2025). DevRel becomes the modeller of authentic AI-augmented practice, the translator between developer perception and telemetry, and the curator of trust signals AI assistants cannot synthesise.

  3. The Post-Mortem Camp. Argues that the kind of DevRel that died in 2022–2024 was dying anyway and AI just accelerated the reckoning. “RIP DevRel 2010–2024” (MB Consulting, 2024). DevRel-as-content-marketing is the casualty; DevRel-as-strategic-function lives on, restructured.

  4. The Agent-Experience Camp. Sees the most important new DevRel discipline as designing for AI agents — your APIs, docs, llms.txt, MCP servers, prompt templates, error messages, and so on are now consumed primarily by software, not by people. Best operationalised inside AI-product companies (OpenAI, Anthropic) and at infrastructure companies that already ship MCP servers.

  5. The Sceptical / Empirical Camp. Cautions that much of what’s claimed about AI-era DevRel is unverified marketing. llms.txt may have negligible measurable impact; AI-citation studies show inconsistent results; some of the breathless 2024–2025 advice has not survived 2026 data. Best represented by analyses like Signals.sh’s “Does llms.txt actually work?” (2026) and the broader observability community’s caution that sentiment about AI productivity diverges from telemetry.

In practice, most mature DevRel teams in 2026 hold a position that draws from camps 1, 2, and 4 — adopting the dual-audience design, retaining the strategic-reframe argument, building agent-facing surfaces — while taking camp 5’s empirical caution seriously enough to instrument outcomes rather than trust the rhetoric.
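The "agent-friendly tool schemas" that camps 1 and 4 care about look roughly like this: a single tool definition in the JSON-Schema style used by most function-calling and MCP-style interfaces. The tool name and fields here are hypothetical:

```json
{
  "name": "create_api_key",
  "description": "Create a scoped API key for the current project. The key is returned exactly once and cannot be retrieved again.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "project_id": {
        "type": "string",
        "description": "Project identifier, e.g. proj_123"
      },
      "scopes": {
        "type": "array",
        "items": { "type": "string", "enum": ["read", "write", "admin"] },
        "description": "Permissions granted to the key"
      }
    },
    "required": ["project_id", "scopes"]
  }
}
```

The DevRel craft concentrates in the description fields: an agent cannot ask a colleague what a parameter means, so every constraint and consequence has to be stated inline.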

What to read first

If you have time for only three of the files below:

  1. ./the-dual-audience-thesis.md
  2. ./agent-experience-ax.md
  3. ./early-signals-what-works.md

If you have time for one: the dual-audience thesis. It is the lens for everything else.

ENTRIES IN THIS SECTION   ✣

  1. Metrics in the AI Era

    DevRel metrics changed in 2024–2026. The old funnel — page view, signup, activation — is still there, but new measurement surfaces have appeared, and several pre-AI metrics now mean different things. This file is a working framework.

  2. Documentation for Agents

    When your docs are consumed by AI agents as much as by humans, the craft of documentation changes. This file is a practical guide.

  3. The Dual Audience Thesis

    The single organising idea of DevRel in 2026. Developer Relations now serves two distinct audiences simultaneously, and the content that works for one rarely works for the other.

  4. LLM-Mediated Discovery

    For most of the 2010s, the developer's first encounter with a new product was a Google search followed by ten browser tabs. The 2020s, especially 2024 onward, broke that pattern. Developer research now starts with an AI assistant — ChatGPT…

  5. Inspiration Stays Human

    The counterintuitive thesis: as AI agents take over more of the execution side of developer work, the inspiration side becomes more, not less, important. This file makes the argument.

  6. The `llms.txt` Standard

    The single most-discussed agent-facing DevRel convention of 2024–2026. Proposed by Jeremy Howard (Answer.AI, fast.ai) on September 3, 2024, llms.txt is a markdown-based file placed at the root of a website (e.g. example.com/llms.txt) that…

  7. Model Context Protocol (MCP) as a DevRel Surface

    If llms.txt is the documentation surface for AI agents, MCP is the capabilities surface. Where llms.txt lets agents understand your product, MCP lets them operate it. For developer-product companies in 2026, publishing an MCP server has…

  8. Agent Experience (AX)

    A discipline that did not exist as a vocabulary in 2022, was emerging in 2024, and by 2026 has its own job titles, design practices, and measurement frameworks. Agent Experience (AX) is the experience that AI agents have when interacting…

  9. Generative Engine Optimisation (GEO) and Answer Engine Optimisation (AEO) for DevRel

    If SEO was the discipline of being found by search engines, GEO and AEO are the disciplines of being cited by AI assistants. The two terms overlap in practice but differ in nuance.

  10. Vibe Coding and the AI Engineer

    Two adjacent identity shifts in the developer population that DevRel teams must understand: the rise of the "AI Engineer" as a recognised category since 2023, and the cultural emergence of "vibe coding" as a practice since early 2025. Both…

  11. Perspectives and Debates

    The DevRel-in-the-AI-era conversation has been substantive, contested, and dispersed across blog posts, podcasts, conference talks, and trade-press essays. This file collects the major named voices and the arguments they make, in a way that…

  12. Critical Takes

    A fair AI-era DevRel reference must take the sceptical arguments seriously. Some of them are strong; some of them are weak. This file lays them out, plus the rebuttals where they apply.

  13. Early Signals: What Works

    The most useful question to ask, after all the framing and debate: which DevRel approaches are demonstrably producing outcomes in 2025–2026?