A small but production-style background job processing system built with FastAPI, Redis, RQ, and Docker Compose.
It exposes a simple HTTP API where clients can enqueue jobs, check their status, and retrieve results—while the actual work is processed asynchronously by a background worker.
This project is a Dockerized Python microservice system designed to demonstrate real-world DevOps practices.
It includes an API built with FastAPI, a background worker for asynchronous processing, and Redis as the message broker.
The goal is to showcase containerization, service orchestration, logging, troubleshooting, and production‑ready architecture.
High-level architecture:
    Client (Swagger UI / HTTP)
              |
              v
    FastAPI container (bg-api)
              |
              v
    Redis container (redis)
              |
              v
    Worker container (bg-worker)
              |
              v
    Job processed + result stored
Goal: Demonstrate how a DevOps engineer designs and runs a small, realistic microservice with:
- FastAPI as the HTTP API
- Redis as a message broker / job queue
- RQ worker for background processing
- Docker & Docker Compose for containerized orchestration
- AWS EC2 as the runtime environment
This project is intentionally small in scope but structured like a real-world service: separate components, clear responsibilities, and observable behavior through logs and HTTP endpoints.
This project is fully containerized using Docker and Docker Compose. All components (API, Redis, Worker) start together with a single command.
    docker compose up --build

This will:
- Build the API image
- Build the Worker image
- Start Redis
- Start all containers
- Attach logs from all services into one terminal
You should see output similar to:
    Attaching to bg-api, bg-worker, redis
    redis      | Ready to accept connections
    bg-api     | Uvicorn running on http://0.0.0.0:8000
    bg-worker  | Listening on default...
Press CTRL + C, then run:
    docker compose down

Open:

    http://<your-ec2-ip>:8000/docs

This gives you the interactive Swagger UI where you can:
- Submit jobs
- Check job status
- Retrieve results
Send a POST request to:
    curl -X POST "http://<your-ec2-ip>:8000/jobs" \
      -H "Content-Type: application/json" \
      -d '{"text": "hello world"}'

You will receive a JSON response containing a job_id.
Use the job_id you received earlier:
    curl "http://<your-ec2-ip>:8000/jobs/<job_id>"

Possible statuses include:
- queued — the worker has not picked it up yet
- in_progress — the worker is processing it
- completed — the result is ready
- failed — something went wrong during processing
Once the job status is completed, fetch the result:
    curl "http://<your-ec2-ip>:8000/jobs/<job_id>/result"

If the job succeeded, you will receive the processed output. If it failed, you will receive an error message.
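The result comes from whatever the job function returns. As a sketch, the worker-side function could be as simple as the following — the actual processing logic here (uppercasing and word counting) is purely illustrative:

```python
# worker.py — sketch of a job function the worker could run.
# RQ simply imports this function and calls it with the arguments
# the API enqueued; the return value becomes the job's result.
def process_text(text: str) -> dict:
    # Example "work": normalize the input and compute simple stats.
    words = text.split()
    return {
        "original": text,
        "uppercase": text.upper(),
        "word_count": len(words),
    }
```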
These variables control how your API and worker behave.
Create a .env file in the project root and add:
    REDIS_HOST=redis
    REDIS_PORT=6379
    API_PORT=8000

Make sure the .env file is in the same directory as your docker-compose.yml.
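Both the API and the worker can then read these values at startup. A minimal sketch, using only the standard library, with defaults that match the values above (inside Docker, the service name `redis` resolves to the Redis container via Docker's internal DNS):

```python
import os

# Read connection settings from the environment, falling back to the
# same defaults as the .env file so the code also works locally.
REDIS_HOST = os.getenv("REDIS_HOST", "redis")
REDIS_PORT = int(os.getenv("REDIS_PORT", "6379"))
API_PORT = int(os.getenv("API_PORT", "8000"))
```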
Your repository should look like this:
    project/
    ├── api/
    │   ├── main.py
    │   ├── worker.py
    │   └── requirements.txt
    ├── docker-compose.yml
    ├── Dockerfile.api
    ├── Dockerfile.worker
    ├── .env
    └── README.md
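A docker-compose.yml matching this layout might look like the following sketch — the service names, image tag, and build contexts are assumptions based on the files listed above, not the project's exact file:

```yaml
services:
  redis:
    image: redis:7-alpine

  api:
    build:
      context: .
      dockerfile: Dockerfile.api
    env_file: .env
    ports:
      - "8000:8000"    # expose the API to the host
    depends_on:
      - redis

  worker:
    build:
      context: .
      dockerfile: Dockerfile.worker
    env_file: .env
    depends_on:
      - redis          # worker only talks to Redis, no ports exposed
```

Only the API publishes a port; Redis and the worker communicate over the Compose network and stay unreachable from outside.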
This is how the services communicate inside Docker:
    API ---> Redis ---> Worker
     |                     ^
     |_____________________|
The API pushes jobs to Redis, and the worker pulls and processes them.
To stop all running containers, press:
CTRL + C
Then remove the containers (but keep the images) with:
    docker compose down

To see the logs for all running services, use:

    docker compose logs -f

To view logs for a specific service (for example, the API), run:

    docker compose logs -f api

Once the containers are running, you can test the API using:

    curl http://localhost:8000/

To send a job to the worker through the API, run:

    curl -X POST "http://localhost:8000/jobs" \
      -H "Content-Type: application/json" \
      -d '{"text": "hello world"}'

1. Port already in use (API fails to start)
If you see an error about port 8000 being in use, find the process with:
    lsof -i :8000

Then stop it:

    kill -9 <PID>

2. Redis connection errors
Make sure the Redis container is running:
    docker compose ps

You should see a container named redis with status “Up”.
3. Code changes not applying
If you modify Python files but the container doesn’t update:
    docker compose up --build

This forces Docker to rebuild the images.
4. Worker not processing jobs
Check worker logs:
    docker compose logs -f worker

If it’s running but idle, the API may not be sending jobs correctly.
Backend
- Python (FastAPI)
- Redis (message broker)
- Worker service (background job processor)
Containerization
- Docker
- Docker Compose
Infrastructure & DevOps
- Linux environment
- CI/CD‑ready project structure
- Isolated multi‑service architecture
Networking
- Internal Docker networks
- Port mapping for API access
- Add Docker health checks for API, Redis, and Worker
- Implement retry logic and dead‑letter queues for failed jobs
- Add unit tests and integration tests for API and Worker
- Introduce environment‑specific Compose files (dev / prod)
- Add monitoring (Prometheus + Grafana) for container metrics
- Migrate to a message queue like RabbitMQ or AWS SQS for scaling
- Add CI/CD pipeline to automate builds and deployments
This project demonstrates a clean, production‑inspired microservice architecture using Docker and Python.
It includes an API service built with FastAPI, a background worker for asynchronous job processing, and Redis as the message broker.
All services run in isolated containers using Docker Compose, making the system easy to start, stop, and extend.
The project highlights practical DevOps skills such as containerization, service orchestration, logging, troubleshooting, and environment‑ready structure.
It serves as a strong foundation for scaling into more advanced distributed systems or integrating CI/CD pipelines.