Documentation Experience (DX) Assessment — Study App (performed 18 Apr 2026)

Overview

Test snapshot: same rubric as 14 Apr 2026, refreshed against the repo on 18 Apr 2026. Not listed in docs/audit/docs/README.html (intentional).

This assessment compares our repo to common good practice, scores it (Table 2), and tracks prioritized gaps (section 4.3).

Scope and methodology

Scope

Documentation experience (DX) here means using our docs end to end: nav, search, examples, API reference, runbooks, and trust signals (changelog, errors). Table 1 is generic; Table 2 is about this repo’s docs/ tree and automation.

Method

The review proceeds in four steps:

  1. Reference list — Table 1 describes good DX practices in neutral terms.
  2. Map to repo — Table 2 scores each practice (1–10) for this repo, with pointers (paths, ADRs, tools).
  3. Summarize — overall score and gaps in section 4 (narrative, weighted model, Top‑five gap status).
  4. Follow up — record outcomes and changes in the page history.

Scores are subjective judgments, not a formal audit.

What was examined

What this assessment is not

Industry context, PET scope, and standards we do not chase (yet)

Big tech often runs central doc portals, search and analytics, translation workflows, and AI assistants with dedicated teams. We use a lighter Table 1 / Table 2 rubric in git per ADR 0024.

The docs are PET-scale: static HTML and generated API docs, not a paid developer platform. Many Table 1 rows (multi-SDK tabs, enterprise portal, huge search) are benchmarks, not requirements. Low scores often mean “not worth it yet” or “English-only by choice,” not failure. Priorities sit in section 4.3 (Top five gaps).

Table: Reference practices

# Category Practice Reference description Typical benchmark
1 IA Diátaxis-style split Separate tutorials (learning-oriented), how-to guides (task-oriented), reference (information-oriented), and explanation (understanding-oriented). Users land in the right mode for their goal; navigation labels reflect it. Canonical mental model for technical docs; adopted widely in API and platform teams.
2 IA Progressive disclosure “Happy path” first; edge cases, limits, and failure modes linked or collapsed so beginners are not overwhelmed; experts can jump to reference or deep links. Reduces time-to-first-success (TTFS) without hiding rigor.
3 Journey Onboarding spine A single curated path: install → authenticate → first API call / first deploy → next steps. Each step has verification (“you should see…”) and links to troubleshooting. Stripe/Twilio-style quickstarts remain the pattern to beat for APIs.
4 Ops Docs-as-code Docs live in version control; changes are reviewed like code; publishing is automated. Prose and diagrams are diffable; ownership is explicit (CODEOWNERS or equivalent). Baseline for mature engineering orgs; see also ADR-driven governance.
5 Ops Single source of truth (SSOT) Generated sections (OpenAPI, CLI help, config tables, route lists) are produced from artifacts, not hand-copied. Manual text links to generated anchors explicitly. Eliminates the #1 DX failure: docs that contradict production.
6 Ops Versioning aligned with product Doc sets are versioned (or time-versioned) with the product; deprecation and sunset are visible in-docs and in machine-readable metadata where applicable. Essential once you have more than one supported major version.
7 Quality CI for docs Pipeline runs link checking, spelling/grammar linters (e.g. Vale with a house style), OpenAPI/MDX validators, and (for APIs) contract tests. Broken links fail the build or warn with SLAs. “Treat docs like code” in the strict sense — measurable gates.
8 Quality Tested code paths Copy-paste examples are executed in CI (or marked as pseudo-code). CLI snippets and curl examples target real endpoints; responses shown match schema or are clearly illustrative. Top teams refuse “example rot.”
9 Quality Error and failure documentation Errors are documented systematically: HTTP status, stable machine-readable codes where applicable, remediation, idempotency implications, and links to runbooks. Pairs with OpenAPI Problem Details / structured error bodies.
10 API docs OpenAPI (or equivalent) as contract Reference is generated from the spec; examples are first-class; auth schemes and security requirements are explicit; servers and environments are listed without ambiguity. Industry default for HTTP APIs; governance lint (Spectral/Redocly) is common.
11 API docs Interactive “try it” Sandboxed requests against test credentials, or clear warnings for production. Rate limits and scopes are visible before the user sends a request. Improves learning; must be safe-by-default.
12 SDK docs Multi-surface examples Same conceptual page offers language tabs (curl + official SDKs); idioms match each language; versioning notes per SDK where releases diverge. Expected at platform scale; smaller teams may start with one SDK + HTTP.
13 Discovery First-class search Full-text search with typo tolerance; results ranked by recency/relevance; scoped search within a doc set (e.g. “API reference only”). Offline/static options (Pagefind, Lunr) remain valid. Algolia DocSearch and similar patterns are table stakes for large sites.
14 Discovery AI-assisted answers (2024–2026) Optional assistant grounded in your docs (RAG), with citations to source pages; guardrails for “I don’t know”; no training on customer secrets; logging aligned with privacy policy. Rapidly becoming expected; quality depends on chunking and retrieval, not model size alone.
15 UX Performance (Core Web Vitals) Doc sites load fast on mobile: optimize LCP (hero text, fonts), avoid huge client bundles on content pages, lazy-load non-critical widgets. DX is part of perceived product quality.
16 UX Deep links and anchors Stable URLs for sections; headings get permalink anchors; “copy link” affordances. API objects link to changelog entries when behaviour changes. Supports support tickets and Slack threads.
17 A11y / i18n* WCAG-aligned UI Semantic HTML, keyboard navigation, focus order, color contrast, captions/transcripts for video, descriptive link text (not “click here”). WCAG 2.2 AA is a common corporate bar for customer-facing docs.
18 A11y / i18n* Localization strategy If multilingual: translation workflow, glossary, and “source of truth” language; RTL and string expansion considered. If English-only: stated explicitly to set expectations. Enterprise buyers often require localized docs.
19 Inclusivity Plain, inclusive language Style guide for inclusive terms; avoid unnecessary idioms; define acronyms on first use; consistent terminology (controlled vocabulary). Reduces misreads for non-native speakers and new hires.
20 Trust Changelog discipline Human-readable changelog or release notes linked from docs; breaking changes are called out with migration steps; dates and version tags are unambiguous. Pairs with API versioning and ADR records for rationale.
21 Trust Security hygiene in examples No real secrets in repos; placeholders and env var names are standard; “redacted” samples for tokens; guidance on rotating leaked keys. Prevents docs from becoming an attack surface.
22 Feedback Visible feedback channel “Edit this page” (for open repos), issue templates, or thumbs up/down with optional comment. Feedback routes to owners; SLAs are realistic. Closes the loop between readers and authors.
23 Feedback Privacy-respecting analytics Aggregate navigation and search success metrics; minimize PII; document cookies/consent if required; distinguish docs traffic from app traffic. GDPR/CCPA-conscious teams prefer first-party or aggregated analytics.
24 Support Runbooks and on-call alignment Operational docs (incident response, dashboards, rollback) are kept next to developer docs or clearly cross-linked; ownership matches pager rotations. Blurs “product docs” vs “internal ops” — both are DX for different audiences.
25 Platform Developer portal pattern Single entry: identity/keys, usage dashboards, quotas, docs, and support — coherent navigation and branding. For smaller products, a minimal portal is “good README + API reference + status page.” At scale, portals integrate billing and org management; don’t over-build early.
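Several of the rows above (row 7, CI for docs, and row 16, deep links and anchors) come down to mechanical checks that can fail a build. As a hedged sketch of the link-checking half, here is a minimal internal-link checker for a static HTML tree; the function name, regex, and CI wiring are illustrative assumptions, not this repo's actual tooling:

```python
"""Minimal internal-link checker for a static HTML docs tree (illustrative sketch)."""
import re
from pathlib import Path

# Matches href="target" or href="target#fragment"; skips pure-anchor hrefs.
HREF_RE = re.compile(r'href="([^"#]+)(?:#[^"]*)?"')

def broken_internal_links(docs_root: str) -> list[tuple[str, str]]:
    """Return (page, target) pairs whose relative href resolves to no file.

    External links are skipped here; they need a (rate-limited) network
    check and are better handled as a separate, warning-level CI step.
    Root-relative links (href="/x.html") would need resolution against
    docs_root rather than the page's directory; omitted for brevity.
    """
    root = Path(docs_root)
    broken = []
    for page in root.rglob("*.html"):
        for target in HREF_RE.findall(page.read_text(encoding="utf-8")):
            if target.startswith(("http://", "https://", "mailto:")):
                continue
            if not (page.parent / target).exists():
                broken.append((str(page.relative_to(root)), target))
    return broken

# CI usage (sketch): treat any returned pair as a build failure, e.g.
#   import sys; sys.exit(1 if broken_internal_links("docs") else 0)
```

Wired into a pipeline this way, broken links fail the build rather than warn, which is the strict "treat docs like code" gate row 7 describes.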

Table: AS-IS situation

One row per Table 1 practice (rows 1–25). Scores are 1–10 (our guess, not a formal audit). Colours come from docs/assets/docs.css (ADR 0024).

# Practice (Table 1) Study App evidence / notes Justification Score
1 Diátaxis-style split Content is split across ADRs, developer guides, runbooks, and generated API reference, but landing pages do not consistently label pieces as tutorial vs how-to vs reference vs explanation. Readers infer structure from folders and titles. Folders imply roles; explicit Diátaxis labels would raise confidence. 7
2 Progressive disclosure Long-form guides and Makefile-oriented docs exist; happy path vs edge case is mostly a matter of author style, not a uniform pattern of “basics first, deep links for limits.” Quality depends on author habit; no uniform “basics first” pattern. 7
3 Onboarding spine Developer guides cover local dev, workflows, and API topics; there is no single named page that chains install → authenticate → first successful request with explicit verification checkpoints end-to-end. Strong pieces exist; no single end-to-end golden path with checkpoints. 7
4 Docs-as-code ADR 0001; HTML and assets live in git; changes are reviewable like code; docs pipeline automates sync and generation steps. ADR 0001 + pipeline: reviewable, automated — reference-tier for PET. 9
5 Single source of truth (SSOT) OpenAPI baseline and governance reduce drift vs hand-maintained reference; pdoc reflects the Python API; UML and markers are generated or checked. Narrative docs still require manual care to stay aligned with env tables and behaviour. OpenAPI + generation limit hand drift; narrative still manual. 9
6 Versioning aligned with product CHANGELOG.md with a CI changelog gate; ADRs record decisions. Doc sets are not published as separate versioned subsites; alignment is via repo tags and changelog entries rather than per-major doc trees. Changelog + tags; not separate versioned doc subsites. 8
7 CI for docs make docs-check fails on drift between generated and committed artifacts and includes docs-feedback-check; OpenAPI checks run in make verify / verify-ci. GitHub Actions quality job runs make verify (regenerates docs via docs-fix) without the docs-check git-diff drift gate — drift is caught by contributor verify-ci / pre-commit. Spelling or Vale-style prose lint for all hand-written HTML is not universal. Strong local/PR gates; CI omits docs-check drift (by design). 8
8 Tested code paths Contract and API tests exercise the runtime; copy-paste examples in prose docs are not systematically executed as part of a docs-specific CI job. Some examples are illustrative only. Examples not executed as a dedicated docs CI job. 6
9 Error and failure documentation Error matrix and OpenAPI examples document stable codes; runbooks cover operational angles. Not a single Twilio-style public “error catalog” site, but the material exists across docs and spec. Errors covered across matrix, OpenAPI, runbooks — not one catalog site. 8
10 OpenAPI (or equivalent) as contract Governance ADRs, docs/openapi/openapi-baseline.json, embedded explorer patterns; reference is driven from the app’s OpenAPI rather than hand-written endpoint lists alone. Baseline + governance: contract is enforced in repo. 9
11 Interactive “try it” Swagger UI at /docs and static docs/openapi/openapi-explorer.html against the committed snapshot; users must supply their own keys and understand environment risk. Rate limits and auth are documented in OpenAPI and prose. Swagger UI + static explorer; env risk stays with the operator. 8
12 Multi-surface examples Primary examples are HTTP-oriented (curl, OpenAPI); there is no tabbed curl + multiple official SDKs on the same page. Scope matches a small service without generated client libraries. HTTP-only scope; no multi-SDK tabs — intentional at this scale. 5
13 First-class search docs-fix runs scripts/build_docs_search_index.py, which scans docs/**/*.html into docs/assets/search-index.json. docs-nav.js loads that index for global search (see docs/assets/docs.css “Global docs search”); pdoc output under docs/api/ also ships search.js. RFC 0001 / ADR 0027 document ranking and telemetry goals — not Algolia-class typo tolerance or hosted analytics out of the box. Full static HTML index + nav integration; tuning/metrics still maturing. 7
14 AI-assisted answers (2024–2026) No in-product docs assistant or RAG over this doc set is part of the repository. Optional future work; must be citation-grounded if introduced. No in-repo RAG assistant — acceptable until prioritized. 4
15 Performance (Core Web Vitals) Static HTML and shared CSS; no heavy SPA framework on doc pages. Performance depends on hosting and asset weight; not formally audited against Core Web Vitals in this assessment. Static HTML/CSS; CWV not measured in this assessment. 7
16 Deep links and anchors HTML pages use stable paths under docs/; shared nav and section anchors support linking to sections. Not every page exposes a visible “copy link to heading” control. Stable paths and anchors; copy-link affordance not everywhere. 7
17 WCAG-aligned UI Semantic HTML patterns and shared stylesheet; no claim of a full WCAG 2.2 AA audit of every template in this mapping. Semantic patterns; full WCAG 2.2 AA not claimed for every template. 6
18 Localization strategy Documentation is English-first; multilingual translation workflow and glossary are out of scope unless the product adds them explicitly. English-first; translation workflow out of scope unless product asks. 5
19 Plain, inclusive language Engineering and ADR tone is generally precise; a standalone inclusive-language style guide for all prose is not asserted here. Precise engineering tone; inclusive glossary not centralized. 7
20 Changelog discipline Keep a Changelog format; CI gate on main/master pushes and PRs; ties release communication to repo history. Changelog gate + format — strong trust signal. 9
21 Security hygiene in examples Env templates and docs emphasize configuration; examples use placeholders for keys. Contributors are guided not to commit secrets; aligns with normal OSS practice. Env templates and placeholder discipline match good practice. 8
22 Visible feedback channel GitHub issue template docs_feedback.md with label docs-feedback; make docs-feedback-check validates wiring; weekly cadence workflow opens/updates triage issues. In-page “was this helpful?” remains optional. Structured feedback + automation; not in-page ratings. 7
23 Privacy-respecting analytics Static hosting implies no first-party docs analytics unless the team adds it; no cookie banner or analytics pipeline is described as part of this project’s default docs delivery. No default first-party docs analytics pipeline. 5
24 Runbooks and on-call alignment Runbooks exist under docs/runbooks/; engineering practices and ADRs link operational and design context. On-call rotation is outside repo scope. Runbooks under docs/runbooks; on-call roster outside repo. 8
25 Developer portal pattern The repo delivers a documentation site (index, guides, API explorer, ADRs) rather than a full SaaS-style portal with billing, org dashboards, and integrated support. Appropriate for PET scale. PET-appropriate: docs site, not a billing portal. 6
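Row 13's index build can be illustrated with a stripped-down sketch: walk docs/**/*.html, pull out each page's title and visible text, and emit a JSON array that a client-side nav script can load and match against. This is a hypothetical reconstruction of the general technique, not the repo's actual scripts/build_docs_search_index.py; field names and regexes are assumptions:

```python
"""Illustrative static-docs search index builder (not the repo's actual script)."""
import json
import re
from pathlib import Path

TAG_RE = re.compile(r"<[^>]+>")
TITLE_RE = re.compile(r"<title>(.*?)</title>", re.IGNORECASE | re.DOTALL)

def build_search_index(docs_root: str) -> list[dict]:
    """One record per HTML page: relative URL, title, and plain-text body."""
    root = Path(docs_root)
    records = []
    for page in sorted(root.rglob("*.html")):
        html = page.read_text(encoding="utf-8")
        title_match = TITLE_RE.search(html)
        title = title_match.group(1).strip() if title_match else page.stem
        # Strip tags and squeeze whitespace to get searchable plain text.
        text = " ".join(TAG_RE.sub(" ", html).split())
        records.append({"url": str(page.relative_to(root)),
                        "title": title,
                        "text": text})
    return records

def write_index(docs_root: str, out_path: str) -> None:
    """Serialize the index to JSON for a client-side search script to fetch."""
    Path(out_path).write_text(json.dumps(build_search_index(docs_root)),
                              encoding="utf-8")
```

A substring or token match over records like these gives working global search on a static host; the ranking and telemetry work tracked in RFC 0001 / ADR 0027 sits on top of an index of roughly this shape.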

Scoring summary

Narrative overall (Table 2 judgment)

Overall (our guess, PET-scale site): about 8.0 / 10 (test snapshot; Table 2 row edits: search evidence + feedback channels).
To move toward top decile: label content by Diátaxis type, add one golden onboarding page, add prose lint and link checking to CI, consider optional citation-grounded AI answers, and close the remaining gaps in section 4.3 (Top five gaps).

Weighted axis model (explicit arithmetic)

Optional cross-check: seven axes (findability, navigation & IA, readability & design, accessibility, maintainability, feedback & improvement, onboarding) scored 0–10, then combined with fixed weights that sum to 100%. This is not the same math as averaging Table 2 rows 1–25; it answers “how strong is the docs program vs reference?”

Axis Weight Score Contribution (weight × score)
Findability 0.18 8.0 1.44
Navigation & IA 0.18 8.0 1.44
Readability & design 0.14 7.5 1.05
Accessibility 0.12 7.5 0.90
Maintainability 0.22 9.0 1.98
Feedback & improvement 0.08 7.5 0.60
Onboarding 0.08 8.0 0.64
Total 1.00 — 8.05 ≈ 8.0 / 10
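The weighted combination above can be reproduced in a few lines; the axis names, weights, and scores are copied from the table, and the rounding to 8.0 is simply display precision:

```python
# Weighted docs-program score, reproducing the axis table above.
# Weights must sum to 1.0; scores are 0-10 subjective judgments.
AXES = {
    "Findability":            (0.18, 8.0),
    "Navigation & IA":        (0.18, 8.0),
    "Readability & design":   (0.14, 7.5),
    "Accessibility":          (0.12, 7.5),
    "Maintainability":        (0.22, 9.0),
    "Feedback & improvement": (0.08, 7.5),
    "Onboarding":             (0.08, 8.0),
}

total_weight = sum(w for w, _ in AXES.values())
assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 1.0"

overall = sum(w * s for w, s in AXES.values())
print(f"overall = {overall:.2f}")  # overall = 8.05, reported as ~8.0 / 10
```

Keeping the weights explicit makes the cross-check auditable: changing one axis score changes the total by exactly weight × delta, unlike an unweighted average of the 25 Table 2 rows.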

Top five gaps — priority and workflow status

Status colours: TODO (not started), IN PROGRESS (owned work in flight), DONE (accepted in main). Update started / closed / PR when you pick up or finish work.

Priority Gap Status Started Closed PR / reference
P0 Hand-written docs vs generated docs/api/ (pdoc) — two different UX layers. TODO
P0 Internal sidebar navigation is manual (INTERNAL_SIDEBAR_NAV) — risk of orphan pages. TODO
P1 Search quality metrics when docs are viewed purely as static hosting (telemetry needs API). IN PROGRESS 2026-04-17 See RFC docs-search / ADR 0027
P1 Visible per-page feedback on high-traffic hubs (“was this helpful?” + issue link). TODO
P2 Canonical public docs URL documented for external readers (if GitHub Pages or other host is used). TODO

Page history

Date Change Author
Aligned heading and table with Page history standard (Date, Change, Author). Ivan Boyarkin
Test snapshot: copied rubric from 14 Apr assessment; refreshed evidence (Makefile/CI, search index, docs feedback automation). Filename 2026-04-18-documentation-experience-assessment.html; omitted from docs/audit/docs/README.html on purpose.

Added Page history section (repository baseline). Ivan Boyarkin