Local development (environment and run targets)

Overview

How to run the API locally, with optional metrics and optional log search.

Run the API with make run. Add Prometheus and Grafana via Docker if you want dashboards. For searchable logs, start Elasticsearch, Kibana, and Filebeat (see below). Shortcuts and probe URLs are in the root README.md; the full logging policy is in ADR 0023.

First-time setup

  1. make venv then make install (activate .venv if you prefer).
  2. Create .env: make env-init (or copy env/example), then set APP_ENV and secrets as needed.
  3. make migrate to apply Alembic migrations.
  4. Optional: make env-check to verify config and database path.
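The steps above as a single copy-paste sequence (a sketch; the targets come from this repo's Makefile, and .env should be edited before running migrations):

```shell
# One-time local setup (sketch of the numbered steps above)
make venv          # create .venv
make install       # install dependencies
make env-init      # create .env (or copy env/example by hand)
# edit .env here: set APP_ENV and secrets as needed
make migrate       # apply Alembic migrations
make env-check     # optional: verify config and database path
```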

Run targets

make run
  Starts the FastAPI app with Uvicorn (reload). Reads .env. Use this when you only need the API.

make run-project
  Runs make observability-up first (renders the Prometheus config, starts docker-compose.observability.yml: Blackbox, Prometheus, Grafana), then runs the same Uvicorn command as make run. Requires Docker. The API process stays in the foreground; stop it with Ctrl+C.

make observability-up / make observability-down
  Start or stop the observability stack on its own. Use these when make run is already active in one terminal and you want metrics dashboards without restarting the API.
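A typical two-terminal workflow for the targets above, sketched under the assumption that Docker is already running:

```shell
# Terminal 1: API only (reload enabled, reads .env)
make run

# Terminal 2 (optional): add dashboards without restarting the API
make observability-up     # starts Blackbox, Prometheus, Grafana
# ... work with the dashboards ...
make observability-down   # stop the observability containers when done
```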

Default local ports

Override via .env where documented in env/example.

After you stop the API

Containers from make run-project or make observability-up keep running until you run make observability-down (or docker compose -f docker-compose.observability.yml down).

Optional logging stack (Elasticsearch, Kibana, Filebeat)

For searchable JSON logs, enable NDJSON output by setting LOG_FORMAT=json (this is also the default when the variable is unset; see env/example and app/core/config.py). Log files are written under LOG_DIR (default logs/).
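A minimal .env fragment for the logging stack. Both values shown are the documented defaults, so setting them explicitly is optional:

```shell
# .env (logging-related keys; defaults per env/example)
LOG_FORMAT=json   # NDJSON output, one JSON object per line
LOG_DIR=logs      # Filebeat tails this directory via the bind mount
```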

Start the Elastic stack (Docker; reserve about 2 GiB RAM for Elasticsearch and Kibana):

make logging-up

Filebeat reads the mounted ./logs folder and sends lines to Elasticsearch.

Open Kibana at http://127.0.0.1:5601. Create a data view with index pattern *study-app-logs* (wildcards on both sides). On Elasticsearch 8, backing indices look like .ds-study-app-logs-…; a narrow pattern such as study-app-logs-* can miss them, so Discover would show no rows.
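If you prefer not to click through the Kibana UI, Kibana 8 exposes a data views API. A hedged sketch (endpoint and payload shape per the Kibana 8.x API; verify against your Kibana version):

```shell
# Create the *study-app-logs* data view from the command line (Kibana 8.x)
curl -X POST "http://127.0.0.1:5601/api/data_views/data_view" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"data_view": {"title": "*study-app-logs*", "timeFieldName": "@timestamp"}}'
```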

Use Discover to filter (e.g. on request_id) or paste a UUID into the search bar. Without Kibana: make logging-es-query or make logging-es-query QUERY=<uuid> (equivalent to python scripts/check_es_request_id.py <uuid>). Smoke test: make logging-smoke. Stop the stack: make logging-down. Policy: ADR 0023.
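You can also query Elasticsearch directly with curl, bypassing both Kibana and the Make target. A sketch, assuming Elasticsearch on the default port 9200 and a request_id field in the log documents:

```shell
# Find recent log documents for one request_id (replace the UUID)
RID="00000000-0000-0000-0000-000000000000"
curl -s "http://127.0.0.1:9200/*study-app-logs*/_search" \
  -H "Content-Type: application/json" \
  -d "{\"query\": {\"match\": {\"request_id\": \"$RID\"}}, \"size\": 5}"
```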

Data path (why Discover can be empty)

  1. API on the host writes lines to ./logs/*.log (e.g. app.log). Nothing is sent to Elastic until step 3.
  2. Filebeat (container) reads the same files via the bind mount (./logs is mounted at /var/log/study-app inside the container) and publishes events to Elasticsearch. If the API is not running or writes elsewhere, Filebeat has nothing new to ship.
  3. Elasticsearch stores documents in indices / data streams whose names contain study-app-logs (e.g. .ds-study-app-logs-…).
  4. Kibana Discover needs a data view that matches those indices — use *study-app-logs*. After you send traffic, set the time range to include “now” (e.g. Last 15 minutes).
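Each hop in the data path above can be checked from the shell. A sketch (ports are the stack defaults; the Filebeat container name filter is an assumption, adjust to what docker ps shows):

```shell
# 1. The API is writing log files under ./logs
ls -lt logs/*.log | head -3

# 2. The Filebeat container is up and shipping (name filter is an assumption)
docker ps --filter "name=filebeat"

# 3. Elasticsearch has matching indices / data streams
curl -s "http://127.0.0.1:9200/_cat/indices/*study-app-logs*?v"
```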

Clean slate (reset logs + Elastic)

make logging-reset stops the stack, removes the Docker volume (indices and Kibana saved objects), and deletes logs/*.log. Then make logging-up, recreate the data view in Kibana (Stack Management → Data views → Create data view) with pattern *study-app-logs* and timestamp @timestamp. Start the API (make run), call /ready a few times, wait ~10–20 s, refresh Discover.
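The reset procedure above as one terminal session (a sketch; the data view step still happens in the Kibana UI, and the API can instead run in its own terminal):

```shell
make logging-reset   # stop stack, drop the Docker volume, delete logs/*.log
make logging-up      # start Elasticsearch, Kibana, Filebeat again
# recreate the data view in Kibana (pattern *study-app-logs*, timestamp @timestamp)
make run &           # start the API in the background for this sketch
for i in 1 2 3; do curl -s http://127.0.0.1:8000/ready >/dev/null; done
sleep 20             # give Filebeat time to ship the new lines, then refresh Discover
```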

Local docs search smoke test (index + telemetry)

Checklist to verify docs search and telemetry on your machine.

  1. Rebuild the search index:
    python3 scripts/build_docs_search_index.py
  2. Serve docs over HTTP (not file://):
    cd docs
    python3 -m http.server 8765
  3. Start API locally (telemetry endpoint lives on API host):
    make run
  4. Open http://127.0.0.1:8765/index.html, type a few queries, and click at least one result.
  5. In browser Network, verify index fetch: GET /assets/search-index.json returns 200.
  6. Verify telemetry writes in SQLite:
    sqlite3 study_app.db "select event, count(*) from docs_search_events group by event order by event;"
  7. Verify KPI aggregation endpoint:
    curl "http://127.0.0.1:8000/internal/telemetry/docs-search/metrics?window_minutes=60"
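The checklist above as one terminal session (a sketch; in practice run the two servers in separate terminals rather than backgrounding them):

```shell
python3 scripts/build_docs_search_index.py   # 1. rebuild the search index
(cd docs && python3 -m http.server 8765 &)   # 2. serve docs over HTTP
make run &                                   # 3. start the API

# 4.-5. browse http://127.0.0.1:8765/index.html, run queries, click a result,
# and confirm GET /assets/search-index.json returns 200 in the Network tab.

# 6. telemetry rows landed in SQLite
sqlite3 study_app.db \
  "select event, count(*) from docs_search_events group by event order by event;"

# 7. KPI aggregation endpoint responds
curl "http://127.0.0.1:8000/internal/telemetry/docs-search/metrics?window_minutes=60"
```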

Why Network can show only OPTIONS

Sanity checks

See also

Page history

Date Change Author
Added Page history section (repository baseline). Ivan Boyarkin