REST API Assessment — Study App (prepared 14 Apr 2026)
Overview
Compare the HTTP API to common good practice, score it (Table 2), and track prioritized gaps (section 4.3).
Scope and methodology
Scope
This review covers the Study App HTTP API in this repo: runtime behavior, OpenAPI contract, ops endpoints, security defaults, metrics, and CI around the public API. Audience: engineers and stakeholders asking how close we are to common “platform API” ideas for a small / PET-scale service (one app, not a global gateway).
The User resource under /api/v1 is the main example; the same patterns (idempotency, errors) apply elsewhere. Items we only describe as industry practice (e.g. ETag/If-Match) are labeled when they are not in code today.
Tables match the repo on this assessment date. If behavior changes, update Table 2. The User API has create / read / update / patch by composite key and no list endpoint—pagination is “not applicable yet,” not “missing.”
Method
The review proceeds in four steps:
- Reference list — Table 1 describes good HTTP/API practices in neutral terms.
- Map to code — Table 2 scores each practice (1–10) for this repo, with evidence (paths, tools, ADRs).
- Summarize — overall score and gaps in section 4 (narrative, Table 2 mean, Top‑five gap status).
- Follow up — page history.
Scores are subjective guesses (PET / integration API lens), not a certification.
What was examined
- Application entrypoint and middleware (app/main.py): request logging, body size limits, authentication, rate limiting, security headers, request context.
- Correlation identifiers (app/core/request_context.py, app/core/logging.py).
- Write idempotency (app/core/idempotency.py, app/repositories/idempotency_repository.py, user routes).
- API contract and documentation pipeline: OpenAPI generation, custom builder, Swagger UI, app/openapi/, baseline and governance scripts.
- Configuration and deployment hints (app/core/config.py, env/, container image).
- Quality gates relevant to the API contract: Makefile targets, .github/workflows/ci.yml, OpenAPI baseline and contract tests.
What this assessment is not
- Not a pentest, compliance audit, or threat model.
- Not proof of production readiness or regulatory fit.
- Not a substitute for load tests, chaos tests, or research with real integrators.
Industry context, PET scope, and standards we do not chase (yet)
Big product orgs often use RFC reviews, launch checklists, security programs, and central API catalogs. We use dated HTML with Table 1 (reference) and Table 2 (scores) per ADR 0024.
Study App is a single-service, PET-scale deploy. Some Table 1 rows matter most at platform scale (distributed rate limits, SSO-only, full portals with billing). “Needs attention” often means not worth it yet or later, not “ignored.” Priorities sit in section 4.3 (Top five gaps).
Table: Reference practices
Legend:
- Category = theme
- Practice = short name
- Reference description = expected behavior (stack-neutral)
- Typical benchmark = who often exemplifies the practice
| # | Category | Practice | Reference description | Typical benchmark |
|---|---|---|---|---|
| 1 | Correlation | Request identifier | Every HTTP request gets a stable id; often echoed as X-Request-Id; the same value appears in structured logs and (when present) in traces. | Minimum bar for support and incidents. |
| 2 | Write reliability | Idempotent POST | Idempotency-Key header + operation scope + body hash; retries return the stored outcome; same key with a different body yields a predictable 409 with a machine-readable code. | Stripe/Twilio-level expectation for payment and write APIs. |
| 3 | Concurrency | Optimistic locking (ETag / If-Match) | GET returns an ETag; PUT/PATCH send If-Match; if the resource changed, respond 412 Precondition Failed. | Avoids “last writer wins” without coordination. |
| 4 | Reads | Conditional GET (304) | Client sends If-None-Match; if unchanged, respond 304 Not Modified with no body. | Saves bandwidth for polling clients. |
| 5 | Collections | Pagination (cursor / keyset) | For large lists, use a cursor or “after this key” rather than deep OFFSET; stable under concurrent writes. | Public list APIs for large catalogs. |
| 6 | API evolution | URL versioning + compatibility policy | Major version prefix (e.g. /api/v1); additive vs breaking rules; deprecation window. | Many clients on one backend. |
| 7 | Errors | Unified error contract | Stable code/key; for 422, structured field list; optionally RFC 7807 (application/problem+json). | Clients branch on codes, not prose. |
| 8 | Access | Authentication and rate limiting | Explicit scheme (API key, OAuth, etc.); on quota exceed, 429, Retry-After, X-RateLimit-*; in clusters, shared counter storage. | Protection and fair consumption. |
| 9 | Security | Limits and headers | Request body size limits; CSP and related security headers; sensible CORS; no secrets in logs. | Baseline hygiene before WAF/mTLS at the edge. |
| 10 | Observability | Logs, metrics, probes | Structured logs (JSON); metrics (e.g. Prometheus); /live and /ready; tracing via W3C/OpenTelemetry as services multiply. | SRE and 24/7 operations. |
| 11 | Tracing | traceparent / OpenTelemetry | Ingress context ties logs and spans together; OTLP export to a tracing backend. | Standard for microservices and platforms. |
| 12 | Contract | OpenAPI as source of truth | operationId, success and error examples, tags, securitySchemes, documented headers (request id, rate limits). | SDK generation and one truth for teams. |
| 13 | CI | Anti-drift for the spec | Completeness checks (e.g. examples on writes and 422), semantic compare to a baseline, optional strict byte-for-byte mode. | As in this repo: openapi-check, contract-test. |
| 14 | DX (Documentation Experience) | Interactive help | Swagger UI or equivalent at runtime; optional static page with an OpenAPI snapshot for browsing without a server. | Fast manual API calls. |
| 15 | DX (Documentation Experience) | Developer portal | More than live docs: navigation, brand, onboarding journeys; OpenAPI is one artifact, not the whole UX. | Large API-first companies. |
| 16 | DX (Documentation Experience) | Reference vs guides | Machine reference docs separate from human guides (scenarios, best practices). | Stripe / Twilio style. |
| 17 | DX (Documentation Experience) | Premium documentation patterns | Layout that supports scanning (e.g. three columns), multi-language examples, search across guides and reference, CI-tested examples, personalized keys in snippets. | See Table 2 row 17 for how far we are from this bar. |
| 18 | Spec distribution | Stable URL and discovery | Public openapi.json, sometimes .well-known; latest/preview variants in a repo. | Integrators and codegen without guessing URLs. |
| 19 | Specification | Overlays | Overlays on canonical OpenAPI for branding/navigation without forking the schema. | Large specs and multi-brand. |
| 20 | Governance | Extended lint (Spectral, etc.) | Style rules for naming, pagination, errors, security—beyond minimal baseline checks. | Internal style guides (AIP-like). |
| 21 | Events | AsyncAPI | Separate specification for webhooks/streams alongside REST. | Complex asynchronous contracts. |
| 22 | Errors | Error catalog | Dedicated pages or sections for error codes beyond inline OpenAPI descriptions. | Twilio-style error reference. |
| 23 | Lifecycle | Changelog and deprecations | Keep a Changelog; HTTP Deprecation / Sunset when sunsetting; minimum support window. | Trust from external integrators. |
| 24 | Platform | Single artifact pipeline | One specification feeds docs, contract tests, mocks, and (when needed) SDKs. | Mature API platforms. |
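To make row 2 concrete, here is a minimal sketch of the idempotent-POST pattern (key + body hash + replay + predictable 409). It uses a hypothetical in-memory store and a stand-in create step; the names `handle_create` and `_idempotency_store` are illustrative, not the repo's actual API.

```python
import hashlib
import json

# Hypothetical in-memory store; a durable table (as Table 1 row 2 expects)
# would let retries survive restarts.
_idempotency_store: dict[str, tuple[str, dict]] = {}

def handle_create(idempotency_key: str, body: dict) -> tuple[int, dict]:
    """Return (status_code, response_body) for an idempotent POST."""
    body_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    stored = _idempotency_store.get(idempotency_key)
    if stored is not None:
        stored_hash, stored_response = stored
        if stored_hash != body_hash:
            # Same key, different payload: predictable machine-readable conflict.
            return 409, {"code": "IDEMPOTENCY_KEY_CONFLICT"}
        # Retry with the same payload: replay the stored outcome.
        return 200, stored_response
    response = {"id": 1, **body}  # stand-in for the real create
    _idempotency_store[idempotency_key] = (body_hash, response)
    return 201, response
```

A retry with the same key and body replays the stored outcome; the same key with a different body yields 409, matching the reference description.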
Table: AS-IS situation
One row per Table 1 practice (rows 1–24). Scores are 1–10 (our guess, not a formal audit). Colours come from docs/assets/docs.css (ADR 0024).
| # | Practice | What exists in the project (code) | Justification | Score |
|---|---|---|---|---|
| 1 | Request identifier | Middleware normalizes/generates an id, X-Request-Id on the response, context in JSON logs (app/core/request_context.py, app/core/logging.py). | Request id wired through logs and responses — strong fit. | 9 |
| 2 | Idempotent POST (writes) | Idempotency-Key, idempotency_keys table, payload hash, 409 on conflict; OpenAPI examples (app/core/idempotency.py, user routes). | Idempotency keys + storage + 409 — reference implementation for writes. | 9 |
| 3 | ETag / If-Match | Not implemented on current routes (no optimistic-locking headers on PUT/PATCH). | Optimistic concurrency not implemented — score reflects absence. | 3 |
| 4 | Conditional GET (304) | Not implemented for GET /api/v1/user/…. | Conditional GET not implemented for current reads. | 2 |
| 5 | Collection pagination | User API has no list endpoint; the practice becomes relevant if collection reads are added. | No list route yet; pagination N/A until collections exist. | 4 |
| 6 | URL version + policy | Router under /api/v1; policy in ADR 0004; OpenAPI servers skew toward local development. | /api/v1 + ADR policy; servers skew local — documented. | 9 |
| 7 | Unified error contract | Stable code/key, custom envelope (not RFC 7807 by default), rich OpenAPI examples; error matrix in docs. | Stable codes + rich OpenAPI examples; custom envelope (not RFC 7807 by default) — strong for PET. | 8 |
| 8 | Authentication and rate limiting | API key, in-memory limiter, 429 with Retry-After and X-RateLimit-*; quotas are not global across replicas without Redis/gateway. | API key + 429 + rate-limit headers; in-memory limits — good on one node, not distributed. | 7 |
| 9 | Limits and security headers | Request body size limit, CORS from config, security headers middleware (app/core/security.py). | Body size limits, CORS, security middleware — solid baseline hygiene. | 8 |
| 10 | Logs, metrics, probes | NDJSON logs, Prometheus metrics, /live, /ready with DB check, optional Elasticsearch/Filebeat for local search. | NDJSON, Prometheus, health checks — strong ops baseline; distributed tracing is row 11. | 9 |
| 11 | traceparent / OpenTelemetry | trace_id/span_id fields reserved in logs; full OTel integration not present. | Trace fields reserved in logs; no full OTel export — partial vs reference. | 4 |
| 12 | Runtime OpenAPI contract | Tags, operationId, examples, custom_openapi documenting X-Request-Id per operation. | OpenAPI drives UI and tests — SSOT for HTTP contract. | 9 |
| 13 | CI against OpenAPI drift | scripts/openapi_governance.py, baseline docs/openapi/openapi-baseline.json, make openapi-check / contract-test, CI steps. | Baseline + governance + CI contract tests — strong anti-drift. | 9 |
| 14 | Interactive help | /docs (Swagger UI) in the app; static docs/openapi/openapi-explorer.html over a spec snapshot. | Swagger UI + static explorer — solid interactive help at PET scale. | 8 |
| 15 | Developer portal | No separate marketing portal; HTML docs, ADRs, indexes—“docs as repo” level. | No marketing portal; repo HTML/ADRs — scope matches small product. | 5 |
| 16 | Reference vs guides | Split: pdoc/OpenAPI vs docs/developer/*, engineering practices, runbooks — no single three-column portal. | Reference split from guides without a single three-column portal — decent. | 7 |
| 17 | Premium patterns (columns, SDKs, search, tested snippets) | Partial: search in generated API docs, curl examples in operation descriptions; no multi-language SDKs from the spec, no personalized keys in snippets. | Partial “premium” patterns; no multi-SDK tabs or personalized snippets. | 4 |
| 18 | Stable public spec URL | /openapi.json from the app; optional .well-known not standardized; baseline in git for partners/CI. | Runtime /openapi.json + baseline in git; no standardized public discovery URL. | 7 |
| 19 | OpenAPI overlays | Not used. | Overlays not used — fair for a single-brand PET API. | 2 |
| 20 | Spectral / external spec lint | Custom governance script and pytest contract tests; does not replicate a full Spectral ruleset. | Custom governance + pytest contracts vs full Spectral ruleset — pragmatic trade-off. | 7 |
| 21 | AsyncAPI | None (REST-only surface). | No separate AsyncAPI artifact — sync-only surface for now. | 2 |
| 22 | Error catalog | Error matrix and OpenAPI examples; no standalone public “error center” like Twilio. | Errors spread across matrix/docs vs single catalog — PET trade-off. | 7 |
| 23 | Changelog and HTTP deprecations | CHANGELOG.md with CI gate; Deprecation/Sunset response headers not implemented. | Lifecycle docs + HTTP deprecation story evolving with API. | 7 |
| 24 | Single artifact pipeline | OpenAPI generated from FastAPI; baseline in repo; tests and verify tied to contract; no SDK generation. | One pipeline from code to OpenAPI to tests — strong alignment. | 8 |
Scoring summary
Narrative overall (Table 2 judgment)
Overall (our guess, integration API): about 7 / 10. To move toward top decile: OpenTelemetry / distributed tracing, shared or edge rate limits, HTTP Deprecation / Sunset headers, stable public spec discovery when the product needs it, and optional ETag / If-Match for concurrent edits — see section 4.3 (Top five gaps).
Unweighted row mean (explicit arithmetic)
Optional cross-check: sum the numeric scores in Table 2 (rows 1–24), then divide by 24. This is not the same math as the narrative 7/10 above (which weights importance by practice); it answers “what is the unweighted mean of the rubric rows?”
| Step | Operation | Figures | Result |
|---|---|---|---|
| 1 | Sum of Table 2 scores | 9+9+3+2+4+9+8+7+8+9+4+9+9+8+5+7+4+7+2+7+2+7+7+8 | 154 |
| 2 | ÷ number of practices | 24 | — |
| Total | Explicit row mean (unweighted) | — | 154 ÷ 24 ≈ 6.42 / 10 |
Interpretation: several high scores (OpenAPI pipeline, idempotency, request ids) sit alongside deliberate “not built yet” rows (ETag, conditional GET, distributed limits), which pulls the unweighted mean below the qualitative ~7/10 headline.
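The explicit arithmetic above can be reproduced in a few lines of Python (scores transcribed from Table 2 in row order):

```python
# Scores from Table 2, rows 1-24, in order.
scores = [9, 9, 3, 2, 4, 9, 8, 7, 8, 9, 4, 9,
          9, 8, 5, 7, 4, 7, 2, 7, 2, 7, 7, 8]

total = sum(scores)           # 154
mean = total / len(scores)    # unweighted row mean

print(total, round(mean, 2))  # prints: 154 6.42
```

Re-running this snippet after editing Table 2 keeps the summary row honest without hand arithmetic.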
Top five gaps — priority and workflow status
Status colours: TODO (not started), IN PROGRESS (owned work in flight), DONE (accepted in main). Update started / closed / PR when you pick up or finish work.
| Priority | Gap | Status | Started | Closed | PR / reference |
|---|---|---|---|---|---|
| P0 | OpenTelemetry / distributed tracing aligned with logs (Table 2 row 11). | TODO | — | — | — |
| P0 | Distributed or edge rate limiting when running >1 replica (row 8). | TODO | — | — | — |
| P1 | HTTP Deprecation / Sunset headers + changelog story (row 23). |
TODO | — | — | — |
| P1 | Stable public OpenAPI / spec discovery URL (e.g. .well-known) when product needs it (row
18). |
TODO | — | — | — |
| P2 | ETag / If-Match for safe concurrent edits if clients need it (row 3). |
TODO | — | — | — |
Page history
| Date | Change | Author |
|---|---|---|
| — | Aligned heading and table with Page history standard (Date, Change, Author). | Ivan Boyarkin |
| — | Added Page history section (repository baseline). | Ivan Boyarkin |
| — | Initial REST API assessment — HTML edition. | — |