Demo project showcasing a multi-tenant, event-driven microservice architecture with FastAPI, Celery, RabbitMQ, and PostgreSQL. The stack models a typical SaaS scenario with tenant-scoped authentication, campaign creation, and asynchronous campaign execution handled by a Celery worker.
- **User Service** (`user-service`): FastAPI API exposing user registration and JWT-based login scoped by `tenant_id`.
- **Campaign Service** (`campaign-service`): FastAPI API for creating and listing campaigns per tenant. Publishes `campaign.created` events to RabbitMQ via Celery.
- **Worker Service** (`worker`): Celery worker consuming `campaign.created` events; it simulates campaign execution, persists processing logs, supports retries, and routes exhausted messages to a dead-letter queue.
- **Infrastructure**: PostgreSQL for persistence and RabbitMQ (with management UI) for messaging.
Each service emits structured JSON logs and exposes basic Prometheus metrics.
All services share a single PostgreSQL instance while scoping records by tenant_id. Tables are created automatically at startup via SQLAlchemy metadata.
| Column | Type | Constraints/Notes |
|---|---|---|
| `id` | `SERIAL` | Primary key, auto-increment |
| `tenant_id` | `VARCHAR(64)` | Required, indexed |
| `email` | `VARCHAR(255)` | Required, indexed |
| `hashed_password` | `VARCHAR(255)` | Required, bcrypt hash of the submitted password |
| `created_at` | `TIMESTAMP` | Defaults to `NOW()` |

Additional constraints: unique composite index on `(tenant_id, email)`.
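The composite uniqueness rule can be sketched with the stdlib `sqlite3` module as a stand-in for the real PostgreSQL/SQLAlchemy setup (column types are translated to SQLite equivalents; the index name is illustrative):

```python
import sqlite3

# In-memory stand-in for the users table; SERIAL becomes
# INTEGER PRIMARY KEY AUTOINCREMENT in SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        tenant_id TEXT NOT NULL,
        email TEXT NOT NULL,
        hashed_password TEXT NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
# The composite unique index: the same email may register under
# different tenants, but only once per tenant.
conn.execute("CREATE UNIQUE INDEX ix_users_tenant_email ON users (tenant_id, email)")

insert = "INSERT INTO users (tenant_id, email, hashed_password) VALUES (?, ?, ?)"
conn.execute(insert, ("tenant-123", "[email protected]", "<bcrypt-hash>"))
conn.execute(insert, ("tenant-456", "[email protected]", "<bcrypt-hash>"))  # OK: other tenant
try:
    conn.execute(insert, ("tenant-123", "[email protected]", "<bcrypt-hash>"))
except sqlite3.IntegrityError:
    print("duplicate (tenant_id, email) rejected")
```

The same email in two tenants is two distinct accounts, which is exactly the multi-tenant behavior the index encodes.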
| Column | Type | Constraints/Notes |
|---|---|---|
| `id` | `SERIAL` | Primary key, auto-increment |
| `tenant_id` | `VARCHAR(64)` | Required, indexed |
| `name` | `VARCHAR(255)` | Required |
| `description` | `TEXT` | Optional |
| `created_at` | `TIMESTAMP` | Defaults to `NOW()` |
| Column | Type | Constraints/Notes |
|---|---|---|
| `id` | `SERIAL` | Primary key, auto-increment |
| `campaign_id` | `INTEGER` | Required, references the originating campaign identifier |
| `tenant_id` | `VARCHAR(64)` | Required, indexed |
| `status` | `VARCHAR(32)` | Values: `success`, `retry`, or `failed` |
| `payload` | `JSON` | Raw event payload processed by the worker |
| `attempts` | `INTEGER` | Number of attempts recorded for this job |
| `last_error` | `TEXT` | Optional error details captured on failure |
| `processed_at` | `TIMESTAMP` | Defaults to `NOW()` when the worker logs the processing step |
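The `status`/`attempts`/`last_error` columns map onto a retry loop like the following pure-Python sketch. It is illustrative only: the real worker runs under Celery and persists rows via SQLAlchemy, and `MAX_ATTEMPTS` is an assumed limit, not the project's configured value.

```python
import json
from datetime import datetime, timezone

MAX_ATTEMPTS = 3  # assumed retry limit; the real value lives in the worker config

def process_event(payload, execute, logs):
    """Retry loop mirroring the processing_logs columns.

    `execute` stands in for the simulated campaign execution; each attempt
    appends a row-like dict with status success, retry, or failed.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            execute(payload)
            status, error = "success", None
        except Exception as exc:
            status = "failed" if attempt == MAX_ATTEMPTS else "retry"
            error = str(exc)
        logs.append({
            "campaign_id": payload["campaign_id"],
            "tenant_id": payload["tenant_id"],
            "status": status,
            "payload": json.dumps(payload),
            "attempts": attempt,
            "last_error": error,
            "processed_at": datetime.now(timezone.utc).isoformat(),
        })
        if status != "retry":
            return status
    return "failed"

# Example: an execution that always fails exhausts its attempts,
# leaving a retry/retry/failed trail in the logs.
logs = []
def always_fail(payload):
    raise RuntimeError("downstream timeout")

process_event({"campaign_id": 1, "tenant_id": "tenant-123"}, always_fail, logs)
```

A `failed` row is the worker's signal that the message has exhausted its retries and is bound for the dead-letter queue.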
```bash
docker-compose build
docker-compose up
```

Services become available on:
- User Service API: http://localhost:8000/docs
- Campaign Service API: http://localhost:8001/docs
- RabbitMQ Management UI: http://localhost:15672 (guest/guest)
- Worker metrics endpoint: http://localhost:9000/metrics
- Postgres: localhost:5432 (postgres/postgres)
Tip: add `-d` to `docker-compose up` to run in detached mode.
- Register a tenant user

  ```bash
  curl -X POST http://localhost:8000/register \
    -H "Content-Type: application/json" \
    -d '{"tenant_id":"tenant-123","email":"[email protected]","password":"secret123"}'
  ```

- Login to get a JWT

  ```bash
  TOKEN=$(curl -s -X POST http://localhost:8000/login \
    -H "Content-Type: application/json" \
    -d '{"tenant_id":"tenant-123","email":"[email protected]","password":"secret123"}' | jq -r '.access_token')
  ```

- Create a campaign (publishes a `campaign.created` event)

  ```bash
  curl -X POST http://localhost:8001/campaigns \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $TOKEN" \
    -d '{"name":"Tenant launch","description":"Welcome campaign"}'
  ```

- List campaigns for the tenant

  ```bash
  curl -H "Authorization: Bearer $TOKEN" http://localhost:8001/campaigns
  ```

- Inspect worker logs

  ```bash
  docker-compose logs -f worker
  ```
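The token returned by the login step can be illustrated with a minimal HS256 JWT built from the stdlib. This is a sketch, not the service's implementation: the real secret comes from the compose environment, and the exact claim set (`sub`, `exp`) is an assumption — the point is the `tenant_id` claim that scopes subsequent requests.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # assumed dev secret, not the real configured value

def _b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(tenant_id: str, email: str, ttl: int = 3600) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({
        "sub": email,
        "tenant_id": tenant_id,  # tenant-scoping claim checked by both APIs
        "exp": int(time.time()) + ttl,
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=="))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

token = make_token("tenant-123", "[email protected]")
claims = verify_token(token)
```

Every authenticated endpoint reads the tenant claim from the verified token rather than trusting a tenant identifier sent in the request body.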
- Access `/metrics` on each FastAPI service for Prometheus-compatible metrics (e.g., http://localhost:8000/metrics). Worker metrics are exposed on port 9000 via the built-in Prometheus client server.
- Logs are emitted in structured JSON for easier ingestion into log aggregation systems.
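Structured JSON logging can be sketched with the stdlib `logging` module; the field names below are illustrative, not the services' exact schema:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""
    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Carry tenant context when the caller passes it via `extra=`.
        if hasattr(record, "tenant_id"):
            entry["tenant_id"] = record.tenant_id
        return json.dumps(entry)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("campaign-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("campaign created", extra={"tenant_id": "tenant-123"})
```

One JSON object per line keeps the output trivially parseable by log shippers without multiline handling.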
To observe retry and DLQ handling, temporarily stop the worker and create campaigns. When the worker restarts, Celery processes the backlogged messages; events that exceed the retry limit are routed to the `campaign.deadletter` queue, visible in the RabbitMQ management UI.
- Requirements for each service live alongside the service code under `*/requirements.txt`.
- Database tables are created automatically on service start using SQLAlchemy metadata.
- Adjust environment variables in `docker-compose.yml` to use stronger JWT secrets or separate databases per service.
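As a sketch of such an override (the service and variable names, e.g. `JWT_SECRET` and `DATABASE_URL`, are assumptions — check `docker-compose.yml` for the keys the services actually read):

```yaml
# Illustrative fragment only; align names with the real compose file.
services:
  user-service:
    environment:
      JWT_SECRET: "use-a-long-random-value"   # replace the dev default
      DATABASE_URL: "postgresql://postgres:postgres@postgres:5432/users"
```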