Flask + SQLAlchemy API with Redis caching and asynchronous writes via RabbitMQ. Reads stay fast with cache; writes are queued and applied by a background worker to avoid DB spikes while keeping read-your-own-writes consistency.
- Python 3.11+
- Redis (e.g. `docker run -d --name redis -p 6379:6379 redis:7`)
- RabbitMQ / Amazon MQ (AMQP) reachable with the credentials in `.env` (local docker-compose provided)
- MySQL/RDS (optional; defaults to SQLite)
```sh
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env   # fill in DB, Redis, RabbitMQ, admin secrets
```

Key `.env` variables:

- `DATABASE_URL` or `DB_USER`/`DB_PASS`/`DB_HOST`/`DB_NAME`
- `REDIS_URL` (default `redis://localhost:6379/0`)
- `ORDERS_CACHE_TTL` (default 30), `TEMP_ORDER_CACHE_TTL` (default 60)
- `ADMIN_PASSWORD`
- `RABBITMQ_HOST`, `RABBITMQ_PORT`, `RABBITMQ_USER`, `RABBITMQ_PASSWORD`, `RABBITMQ_QUEUE_NAME`, `MQ_USE_SSL`
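As a rough illustration (not the app's actual config module — `load_settings` and the `localhost` broker fallback are assumptions), the variables above can be read with stdlib `os.environ` using the documented defaults:

```python
import os

def load_settings() -> dict:
    """Read the cache/broker settings listed above, falling back to the documented defaults."""
    return {
        "redis_url": os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
        "orders_cache_ttl": int(os.environ.get("ORDERS_CACHE_TTL", "30")),
        "temp_order_cache_ttl": int(os.environ.get("TEMP_ORDER_CACHE_TTL", "60")),
        "rabbitmq_host": os.environ.get("RABBITMQ_HOST", "localhost"),
        "rabbitmq_queue": os.environ.get("RABBITMQ_QUEUE_NAME", "order_write_jobs"),
        # MQ_USE_SSL arrives as a string; treat anything but "true" as off
        "mq_use_ssl": os.environ.get("MQ_USE_SSL", "False").lower() == "true",
    }
```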
- Point `REDIS_URL` to your ElastiCache endpoint, e.g. `REDIS_URL=rediss://<primary-endpoint>:6379/0` (use `rediss` for TLS).
- Amazon MQ (AMQP):
  - `RABBITMQ_HOST=<amazon-mq-broker-endpoint>`
  - `RABBITMQ_PORT=5671` for TLS/AMQPS (use 5672 if non-TLS)
  - `RABBITMQ_USER`/`RABBITMQ_PASSWORD` = your MQ user credentials
  - `MQ_USE_SSL=True` when using TLS (recommended)
  - `RABBITMQ_QUEUE_NAME=order_write_jobs` (keep default unless you renamed it)
- When pointing to cloud Redis/MQ, do not start the local `docker compose up redis rabbitmq`.
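To see how these variables fit together, here is a stdlib-only sketch (not the app's actual broker module; the `guest` fallbacks mirror RabbitMQ's defaults and are assumptions) that assembles an AMQP(S) URL from them:

```python
import os
from urllib.parse import quote

def broker_url() -> str:
    """Build an amqp:// or amqps:// URL from the RABBITMQ_* / MQ_USE_SSL variables above."""
    use_ssl = os.environ.get("MQ_USE_SSL", "False").lower() == "true"
    scheme = "amqps" if use_ssl else "amqp"
    # Percent-encode credentials in case they contain URL-reserved characters
    user = quote(os.environ.get("RABBITMQ_USER", "guest"), safe="")
    password = quote(os.environ.get("RABBITMQ_PASSWORD", "guest"), safe="")
    host = os.environ.get("RABBITMQ_HOST", "localhost")
    # 5671 is the TLS/AMQPS port, 5672 the plain one, matching the note above
    port = os.environ.get("RABBITMQ_PORT", "5671" if use_ssl else "5672")
    return f"{scheme}://{user}:{password}@{host}:{port}/"
```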
From `back_end/`:

```sh
# ensure Redis and RabbitMQ are up first
docker compose up -d redis rabbitmq   # from repo root, optional helper
./start_services.sh
```

- API: gunicorn `--workers 3 --threads 4 --bind 0.0.0.0:8080` (override with `API_WORKERS`, `API_THREADS`)
- Worker count default: `WORKER_COUNT=4`
- PID/log files live next to the scripts (`server.pid`, `worker_*.pid`, `server.log`, `worker_*.log`)
- Access logs: `server-access.log` captures each request (override via `ACCESS_LOG_FILE`, `ACCESS_LOG_FORMAT`)
- Startup connectivity logs show Redis/RabbitMQ/DB reachability in `server.log` (and worker logs) when the app boots
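A startup reachability probe like the one logged at boot can be as simple as a TCP connect; a minimal stdlib sketch (`reachable` is a hypothetical helper, not the project's actual code):

```python
import socket

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. at boot, log each dependency's reachability:
# for name, port in [("redis", 6379), ("rabbitmq", 5672)]:
#     print(f"{name} reachable: {reachable('localhost', port)}")
```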
Stop everything: `./stop_services.sh`

Manual run (if you prefer):

```sh
# start Redis and RabbitMQ separately (e.g., docker or cloud broker)
python run.py      # API (dev server)
python worker.py   # queue consumer
```

- `POST /orders`: validate -> write full payload to Redis `temp_order:{id}` (60s TTL) -> enqueue job to RabbitMQ -> return 202 + `order_id`. No direct DB write.
- Worker (`worker.py`): consumes the queue; performs SQL INSERT/UPDATE/DELETE with a simulated 0.3s lag; invalidates Redis list/detail caches and lets the temp entry expire naturally to preserve the read-your-own-writes window.
- `GET /orders/<id>`: check `temp_order:{id}` first (read-your-own-writes); else cached detail `orders:detail:{id}`; else DB.
- `PATCH /orders/<id>/status` and `DELETE /orders/<id>`: enqueue jobs; return 202.
- `GET /orders`: cache key `orders:list:{version}` with version bumped on writes.
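The write and read paths above can be sketched framework-free. In this sketch, in-memory dicts stand in for Redis, the queue, and the DB, and names like `handle_create_order` are illustrative, not the app's actual functions (TTL expiry is not modeled):

```python
import uuid

temp_cache = {}      # stands in for Redis temp_order:{id} keys
detail_cache = {}    # stands in for orders:detail:{id}
queue = []           # stands in for the RabbitMQ queue
db = {}              # stands in for the SQL table

def handle_create_order(payload: dict):
    """POST /orders: cache the full payload, enqueue the write, return 202 immediately."""
    order_id = str(uuid.uuid4())
    temp_cache[f"temp_order:{order_id}"] = payload           # read-your-own-writes window
    queue.append({"op": "insert", "id": order_id, "data": payload})
    return {"order_id": order_id}, 202                       # no direct DB write

def run_worker_once():
    """Worker: drain one job, apply it to the DB, invalidate the detail cache."""
    job = queue.pop(0)
    db[job["id"]] = job["data"]
    detail_cache.pop(f"orders:detail:{job['id']}", None)

def handle_get_order(order_id: str):
    """GET /orders/<id>: temp cache first, then detail cache, then DB."""
    for key, store in ((f"temp_order:{order_id}", temp_cache),
                       (f"orders:detail:{order_id}", detail_cache)):
        if key in store:
            return store[key], 200
    if order_id in db:
        detail_cache[f"orders:detail:{order_id}"] = db[order_id]
        return db[order_id], 200
    return None, 404
```

Note that a read issued between the 202 response and the worker's DB write is served from the temp entry, which is the consistency window the TTL protects.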
- `POST /admin/reset` — wipes all orders/items (requires `ADMIN_PASSWORD`)
- `POST /admin/seed` — seeds fake orders (`{ "count": 100 }` + admin password)
- Verify brokers: `redis-cli PING`, `rabbitmqctl list_queues` (or the AWS MQ console)
- Disable the cache entirely: set `CACHE_DISABLED=true`
- Adjust the temp cache TTL: `TEMP_ORDER_CACHE_TTL`
Locust file: `tests/locustfile.py`

```sh
locust -f tests/locustfile.py --host=http://localhost:8080
```