Docker image and container
Overview
Build the API as an OCI image and run it with Docker for packaging checks and demos. This page covers how we build the API image and run it locally. Production hardening is not covered here; see ADR 0015. Automated build and push to GitHub Container Registry after CI is covered in ADR 0021.
Optional tooling and daily development
Docker is optional for day-to-day work. Use a venv, `make run`, and tests — see Local development. You do not need an image for every change.
The usual loop (code → test → make verify → commit) stays the same. A container image is how many
teams ship the same bits that passed CI to a registry and then run them in a target environment.
Real deployments usually expect a container image: something builds the image,
pushes it to a registry (Docker Hub, GitHub Container Registry, a cloud vendor registry, etc.), and
the target environment pulls that image and runs it with production configuration and secrets.
The repo’s Dockerfile is the starting point for that image.
What a real deployment looks like (high level)
A full production setup is unique to each team, but the pattern is almost always:
- Build and test in CI — the pipeline runs lint and tests, and (often) builds the Docker image and tags it (for example `:1.4.2` or `:<git-sha>`).
- Push to a registry — `docker push` to a URL your servers can read (authentication required). Nothing is “sent” automatically from `git push` unless you wire up a workflow.
- Roll out to an environment — your platform or ops process updates running services to the new image tag (for example on a VM with Docker, or a managed container service).
- Configuration and data — production `APP_ENV`, secrets, database URL, and scaling are set in that environment (not in the image). SQLite is fine for demos; multi-instance setups typically use a shared database (for example PostgreSQL).
- Traffic — DNS points to a load balancer or reverse proxy; TLS terminates at the edge; requests reach your service on port 8000 (or behind a reverse proxy).
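The tag-and-push step above can be sketched in shell. This is a minimal sketch, not this repo's actual pipeline: the registry path `ghcr.io/example-org/study-app-api`, the version, and the git sha are placeholders; a real CI job would get them from its environment.

```shell
# Sketch of a CI tag-and-push step. IMAGE, VERSION, and GIT_SHA are
# placeholders; a real pipeline gets them from the CI environment.
set -eu

GIT_SHA="a1b2c3d4e5f6a7b8"                  # normally provided by CI
VERSION="1.4.2"                             # from the release process
SHORT_SHA="$(printf '%.7s' "$GIT_SHA")"     # short tag from the sha
IMAGE="ghcr.io/example-org/study-app-api"   # hypothetical registry path

# With docker installed and registry credentials configured,
# the job would then run:
#   docker build -t "$IMAGE:$VERSION" -t "$IMAGE:$SHORT_SHA" .
#   docker push "$IMAGE:$VERSION"
#   docker push "$IMAGE:$SHORT_SHA"
echo "$IMAGE:$SHORT_SHA"
```

Tagging with both a version and a short sha is a common convention: the version is human-readable, while the sha ties the image to the exact commit that passed CI.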
What you need installed
To build or run the image locally, install Docker Desktop or Docker Engine — Get Docker.
Docker in one minute
Docker builds an image (a filesystem snapshot plus metadata) and runs it as a container (an isolated process on your machine). The API image installs dependencies from the same `requirements.txt` as `make install`, copies `app/` and `alembic/`, and starts the app via `scripts/container_entrypoint.sh` (`alembic upgrade head`, then Uvicorn). On the host, `make container-start` runs the same script (with your `.venv`); the image does not invoke `make` — there is no `.venv` inside the container.
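Based on the description above, the entrypoint is roughly the following sketch. The real `scripts/container_entrypoint.sh` in the repo is authoritative; the Uvicorn module path `app.main:app` here is an assumption for illustration.

```shell
#!/bin/sh
# Sketch of the container entrypoint described above: run migrations,
# then hand the process over to the app server.
set -e

alembic upgrade head     # apply pending migrations on start

# exec replaces the shell so Uvicorn becomes PID 1 and receives
# stop signals directly. app.main:app is an assumed module path.
exec uvicorn app.main:app --host 0.0.0.0 --port 8000
```

Running migrations in the entrypoint means a fresh container with an empty volume bootstraps its own schema, which is convenient for demos; teams with multiple replicas often move migrations into a separate deploy step instead.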
- Image name used in this repo: `study-app-api:local` (the tag is arbitrary).
- Build: `make docker-build` or `docker build -t study-app-api:local .`
- Run (example): mount a volume for SQLite so data survives container removal:

```shell
docker run --rm -p 8000:8000 \
  -e APP_ENV=dev \
  -e SQLITE_DB_PATH=/data/study_app.db \
  -e API_MOCK_API_KEY=local-dev-key \
  -v study-app-data:/data \
  study-app-api:local
```

Then open http://127.0.0.1:8000/live. For `qa`/`prod`, stricter rules apply: non-default API keys and tighter CORS (see `app/core/config.py`). Pass secrets via `-e` or your platform’s secret mechanism — not committed files.
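With the container running, a quick smoke check from the host might look like the following (the `/live` path comes from this page; the curl flags are standard):

```shell
# Liveness smoke check against the container started above.
# -f: exit non-zero on HTTP errors; -s/-S: quiet, but still show errors.
curl -fsS http://127.0.0.1:8000/live && echo "API is live"
```

This only confirms the process is up and listening; it does not exercise authentication or the database beyond what the `/live` endpoint itself touches.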
SQLite and replicas
This API uses SQLite. A single container with a mounted data directory is fine for demos. Do not assume multiple replicas can share one SQLite file; for horizontal scaling you would move to a shared database (for example PostgreSQL) — a future architectural change.
See also
Page history
| Date | Change | Author |
|---|---|---|
| | Added Page history section (repository baseline). | Ivan Boyarkin |