The generated project includes a Dockerfile, a development `docker-compose.yml`, a production `docker-compose-prod.yml`, and a GitHub Actions workflow for pushing images to Amazon ECR.
## Local development
Start the full stack (PostgreSQL + Flask) from the project root. This brings up:

- PostgreSQL 16 on `localhost:5432`
- Flask (gunicorn) on `localhost:5000` with 4 workers
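The start command is the standard Compose invocation (this document uses the `docker-compose` v1-style CLI elsewhere):

```shell
# Build the images (if needed) and start both services in the foreground
docker-compose up --build
```

Drop `--build` on subsequent runs if the image has not changed; add `-d` to run the stack in the background.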
`init_db()` creates all tables automatically. Test the default route:
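A minimal smoke test, assuming the default route is `/` and the app is listening on port 5000 as described above:

```shell
# Request the default route from the running container
curl http://localhost:5000/
```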
## Dockerfile
The generated `Dockerfile` uses `python:3.11-slim-buster` and runs Flask via gunicorn with 4 `gthread` workers:
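A sketch of what such a Dockerfile looks like; the `app:app` module path and the `requirements.txt` layout are assumptions, since the generated file itself is not shown here:

```dockerfile
FROM python:3.11-slim-buster

WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000

# 4 gthread workers, as described above; "app:app" is a placeholder module path
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "--worker-class", "gthread", "app:app"]
```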
The `ENV` defaults in the Dockerfile are overridden at runtime by the environment variables in `docker-compose.yml`. Always configure sensitive values via `.env`, not the Dockerfile.

## Environment variables at runtime
`docker-compose.yml` injects all variables from your `.env` file into the container:
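A sketch of the relevant part of such a compose file, using Compose's `env_file` key; service names and ports follow the description above, but the generated file may differ:

```yaml
services:
  web:
    build: .
    ports:
      - "5000:5000"
    env_file:
      - .env          # all variables from .env become container environment
    depends_on:
      - postgres
  postgres:
    image: postgres:16
    ports:
      - "5432:5432"
```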
Make sure `.env` at the project root has all required values set before running `docker-compose up`.
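A hypothetical `.env` sketch: only `DB_URL` and `CLEAR_DB` are named in this document, and the values below are illustrative only.

```
# hypothetical values; adjust for your setup
DB_URL=postgresql://postgres:postgres@postgres:5432/app
CLEAR_DB=True
```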
## Production deployment
1. Set `CLEAR_DB` to `False`
2. Push to Amazon ECR via GitHub Actions
The generated workflow at `.github/workflows/build_and_push_to_ecr.yml` triggers on every push to `main` and:
- Configures AWS credentials via OIDC (no long-lived keys)
- Logs in to ECR
- Builds the Docker image
- Tags and pushes it to your ECR repository
Authentication uses `aws-actions/configure-aws-credentials@v4` with an IAM role assumed via GitHub’s OIDC provider. Set the role ARN in your workflow:
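A sketch of the workflow step; the role ARN and region are placeholders you must replace:

```yaml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    # Placeholder ARN: point this at the IAM role your OIDC provider trusts
    role-to-assume: arn:aws:iam::123456789012:role/github-actions-ecr-push
    aws-region: us-east-1
```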
3. Deploy with the production compose file
Update `docker-compose-prod.yml` with your ECR image URL:
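A sketch of what the production file looks like after the edit; the account ID, region, and repository name in the image URL are placeholders:

```yaml
services:
  web:
    # Replace with your own ECR image URL and tag
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest
    ports:
      - "5000:5000"
    env_file:
      - .env
```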
The production compose file does not include the `postgres` service; use a managed database (e.g. AWS RDS) in production and point `DB_URL` at its hostname.