Local development (environment and run targets)
Overview
How to run the API locally, with optional metrics and optional log search.
Run the API with `make run`. Add Prometheus and Grafana via Docker if you want dashboards. For searchable
logs, you can start Elasticsearch, Kibana, and Filebeat (see below). Shortcuts and probe URLs are in the root
README.md; the full logging policy is in ADR 0023.
First-time setup
- `make venv`, then `make install` (activate `.venv` if you prefer).
- Create `.env`: `make env-init` (or copy `env/example`), then set `APP_ENV` and secrets as needed.
- `make migrate` to apply Alembic migrations.
- Optional: `make env-check` to verify config and the database path.
Run targets
| Command | What it does |
|---|---|
| `make run` | Starts the FastAPI app with Uvicorn (reload). Reads `.env`. Use this when you only need the API. |
| `make run-project` | Runs `make observability-up` first (renders the Prometheus config, starts `docker-compose.observability.yml`: Blackbox, Prometheus, Grafana), then the same Uvicorn command as `make run`. Requires Docker. The API process stays in the foreground; stop it with Ctrl+C. |
| `make observability-up` / `make observability-down` | Start or stop the observability stack alone. Use when you already run `make run` in one terminal and want metrics dashboards without restarting the API. |
Default local ports
Override via `.env` where documented in `env/example`.
- API: `127.0.0.1:8000` (from `APP_HOST`/`APP_PORT`)
- Prometheus: `127.0.0.1:9090`
- Grafana: `127.0.0.1:3001` (the container maps host 3001 → Grafana 3000)
- Blackbox exporter: `127.0.0.1:9115`
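As a sketch of how those values are consumed (the variable names `APP_HOST`/`APP_PORT` come from `env/example`; the fallback defaults here are assumptions matching the list above, not a copy of the app's config code):

```python
import os

# APP_HOST/APP_PORT come from .env; the defaults below are assumed to match the list above
host = os.getenv("APP_HOST", "127.0.0.1")
port = int(os.getenv("APP_PORT", "8000"))
print(f"API base URL: http://{host}:{port}")
```

Any value set in `.env` wins over the default, which is why the same command can serve on a different port per machine.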
After you stop the API
Containers from `make run-project` or `make observability-up` keep running until you run
`make observability-down` (or `docker compose -f docker-compose.observability.yml down`).
Optional logging stack (Elasticsearch, Kibana, Filebeat)
For searchable JSON logs, use NDJSON output (`LOG_FORMAT=json`, or leave it unset; `json` is the
default, see `env/example` and `app/core/config.py`). Logs are written under
`LOG_DIR` (default `logs/`).
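As a concrete sketch of what one NDJSON line looks like (the field names below are illustrative assumptions, not the app's exact schema; the real format is defined by `app/core/config.py` and ADR 0023):

```python
import datetime
import json
import uuid

# One log event per line; field names here are illustrative, not the app's exact schema
event = {
    "@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "level": "INFO",
    "message": "GET /ready 200",
    "request_id": str(uuid.uuid4()),  # the value you later paste into Kibana Discover
}
line = json.dumps(event)
print(line)  # a line of this shape is what gets appended under LOG_DIR
```

Because each line is standalone JSON, Filebeat can ship lines as they are written without any multiline parsing.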
Start the Elastic stack (Docker; reserve about 2 GiB RAM for Elasticsearch and Kibana): `make logging-up`.
Filebeat reads the mounted `./logs` folder and sends lines to Elasticsearch.
Open Kibana at http://127.0.0.1:5601. Create a data view with index pattern
`*study-app-logs*` (wildcards on both sides). On Elasticsearch 8, backing indices look
like `.ds-study-app-logs-…`; a narrow pattern such as `study-app-logs-*` can miss them, so
Discover would show no rows.
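You can check the pattern behaviour offline with `fnmatch`, which approximates Elasticsearch index-pattern wildcards (the index names below are made-up examples):

```python
from fnmatch import fnmatchcase

indices = [
    "study-app-logs-2024.05.28",             # a plain index
    ".ds-study-app-logs-2024.05.28-000001",  # an ES 8 data-stream backing index
]
for name in indices:
    narrow = fnmatchcase(name, "study-app-logs-*")
    wide = fnmatchcase(name, "*study-app-logs*")
    print(f"{name}: narrow={narrow} wide={wide}")
```

The narrow pattern matches only the plain index, while `*study-app-logs*` matches both, which is why Discover stays empty with the narrow one.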
Use Discover to filter (e.g. on `request_id`) or paste a UUID into the search bar. Without
Kibana: `make logging-es-query` or `make logging-es-query QUERY=<uuid>` (same as
`python scripts/check_es_request_id.py <uuid>`). Smoke test: `make logging-smoke`. Stop:
`make logging-down`. Policy: ADR 0023.
Data path (why Discover can be empty)
1. The API on the host writes lines to `./logs/*.log` (e.g. `app.log`). Nothing is sent to Elastic until step 3.
2. Filebeat (container) reads the same files via the bind mount `./logs` → `/var/log/study-app` and publishes events to Elasticsearch. If the API is not running or writes elsewhere, Filebeat has nothing new to ship.
3. Elasticsearch stores documents in indices / data streams whose names contain `study-app-logs` (e.g. `.ds-study-app-logs-…`).
4. Kibana Discover needs a data view that matches those indices: use `*study-app-logs*`. After you send traffic, set the time range to include "now" (e.g. Last 15 minutes).
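The first two hops can be mimicked without Docker; a toy sketch (file name, fields, and the `demo-uuid` value are made up) of "the API appends a line, a Filebeat-like reader harvests it":

```python
import json
import tempfile

# Hop 1: the "API" appends one NDJSON line to a log file
with tempfile.NamedTemporaryFile("w+", suffix=".log", delete=False) as log:
    log.write(json.dumps({"message": "GET /ready 200", "request_id": "demo-uuid"}) + "\n")
    path = log.name

# Hop 2: a "Filebeat-like" reader harvests each line; real Filebeat would publish these
shipped = []
with open(path) as fh:
    for line in fh:
        shipped.append(json.loads(line))

print(len(shipped), shipped[0]["request_id"])
```

If the file is never written (API stopped, or writing to a different directory), the reader loop simply yields nothing, which is exactly the "Discover is empty" symptom.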
Clean slate (reset logs + Elastic)
`make logging-reset` stops the stack, removes the Docker volume (indices and Kibana saved objects),
and deletes `logs/*.log`. Then run `make logging-up`, recreate the data view in Kibana
(Stack Management → Data views → Create data view) with pattern
`*study-app-logs*` and timestamp field `@timestamp`, start the API
(`make run`), call `/ready` a few times, wait ~10–20 s, and refresh Discover.
Local docs search smoke test (index + telemetry)
Checklist to verify docs search and telemetry on your machine.
1. Rebuild the search index: `python3 scripts/build_docs_search_index.py`
2. Serve docs over HTTP (not `file://`): `cd docs && python3 -m http.server 8765`
3. Start the API locally (the telemetry endpoint lives on the API host): `make run`
4. Open http://127.0.0.1:8765/index.html, type a few queries, and click at least one result.
5. In the browser Network tab, verify the index fetch: `GET /assets/search-index.json` returns `200`.
6. Verify telemetry writes in SQLite: `sqlite3 study_app.db "select event, count(*) from docs_search_events group by event order by event;"`
7. Verify the KPI aggregation endpoint: `curl "http://127.0.0.1:8000/internal/telemetry/docs-search/metrics?window_minutes=60"`
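What the SQLite check above returns can be previewed in memory; a sketch against an assumed one-column schema with made-up event names (the real `docs_search_events` table has more columns and its own event vocabulary):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs_search_events (event TEXT)")  # assumed minimal schema
con.executemany(
    "INSERT INTO docs_search_events VALUES (?)",
    [("docs_search_performed",), ("docs_search_result_clicked",), ("docs_search_performed",)],
)
rows = con.execute(
    "SELECT event, COUNT(*) FROM docs_search_events GROUP BY event ORDER BY event"
).fetchall()
print(rows)  # one (event, count) pair per distinct event, alphabetically ordered
```

A non-empty result per event type is the signal that the browser actually delivered telemetry, regardless of what DevTools displayed.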
Why Network can show only OPTIONS
- Telemetry may be sent with `navigator.sendBeacon`; DevTools often shows it as ping/other, not as fetch/XHR.
- When the fallback `fetch` is used, the browser sends a CORS preflight `OPTIONS` before the `POST`.
- If DevTools shows only `OPTIONS`, trust the SQLite/API checks above to confirm events arrived.
Sanity checks
- API: `curl -s http://127.0.0.1:8000/live`, `/ready`, `/metrics` (responses include `X-Request-Id`)
- Observability endpoints: `make observability-smoke`
- Logging stack (when running): `make logging-smoke`
See also
- Make commands and workflows — all targets by theme, PlantUML pipeline diagrams, if-then scenarios
- Docker image and container
- ADR 0009 — health, readiness, observability
- ADR 0023 — structured logging and local Elasticsearch
- API load testing (tools.load_testing.runner)
- Runbook: observability scrape failing
- Developer guides index
Page history
| Date | Change | Author |
|---|---|---|
| | Added Page history section (repository baseline). | Ivan Boyarkin |