VRecommendSystem is a recommendation engine built on a microservices architecture, supporting diverse machine learning algorithms and designed for high scalability.
- API Server (Go/Fiber): Backend API gateway handling authentication and routing
- AI Server (Python/FastAPI): ML engine with support for collaborative filtering
- Frontend (React/TypeScript): Management and monitoring dashboard
- Redis: Caching layer
- Prometheus: Monitoring and metrics
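In a typical request flow, the frontend calls the API Server, which handles authentication and caching and proxies the call on to the AI Server. A minimal sketch, using a hypothetical route name (real routes are documented in the Bruno collection under vrecom_api/):
# hypothetical route, for illustration only
curl -H "Authorization: Bearer <token>" http://localhost:2030/api/v1/recommendations
# the API Server authenticates the request, checks the Redis cache, and proxies to the AI Server on port 9999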
# 1. Clone repository
git clone <repository-url>
cd VRecommendSystem
# 2. Copy and configure environment
cp .env.example .env
cp frontend/project/.env.example frontend/project/.env
# 3. Start all services
./docker-start.sh up
Access URLs:
- Frontend: http://localhost:5173
- API Server: http://localhost:2030
- AI Server: http://localhost:9999
- Prometheus: http://localhost:9090
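To quickly confirm every service is reachable, each URL can be probed with curl (the two health-check endpoints used here are listed further down in this README):
# print an HTTP status code for each service
for url in \
  http://localhost:5173 \
  http://localhost:2030/api/v1/ping \
  http://localhost:9999/api/v1/health \
  http://localhost:9090; do
  curl -s -o /dev/null -w "%{http_code}  $url\n" "$url"
done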
For detailed Docker setup, see: DOCKER_SETUP.md
cd backend/api_server
cp example-env .env
# Configure .env as needed
go mod download
go run main.go
cd backend/ai_server
poetry install
poetry run server
cd frontend/project
npm install
npm run dev
All ports are centrally managed in the .env file:
# API Server
API_SERVER_PORT=2030
# AI Server
AI_SERVER_PORT=9999
# Frontend
FRONTEND_PORT=5173
# Redis
REDIS_PORT=6379
# Prometheus
PROMETHEUS_PORT=9090
To change ports:
- Update the .env file
- Restart services:
./docker-start.sh restart
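For example, to move the API Server from 2030 to another port (2031 here is arbitrary), edit the root .env and restart:
# in .env
API_SERVER_PORT=2031
# then
./docker-start.sh restart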
- Authentication & Authorization
- Request routing
- Redis caching
- Proxying requests to the AI Server
Health check: GET http://localhost:2030/api/v1/ping
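A curl call against this endpoint is a simple liveness check; the exact response body isn't specified here, so only the status code is shown:
curl -i http://localhost:2030/api/v1/ping
# expect HTTP 200 once the API Server is up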
- Model training & management
- Recommendation engine
- Data Chefs (ETL pipelines)
- Scheduler for batch jobs
Health check: GET http://localhost:9999/api/v1/health
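The same kind of check works for the AI Server, for example as a polling loop while waiting for the container to become ready:
# poll until the AI Server reports healthy
until curl -sf http://localhost:9999/api/v1/health > /dev/null; do
  sleep 2
done
echo "AI Server is up"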
- Model management dashboard
- Task scheduler interface
- Logs viewer
- Metrics visualization
# Start services
./docker-start.sh up
# Stop services
./docker-start.sh down
# Rebuild images
./docker-start.sh build
# View logs
./docker-start.sh logs
# View logs for specific service
./docker-start.sh logs api_server
./docker-start.sh logs ai_server
# Check status
./docker-start.sh status
# Clean everything
./docker-start.sh clean
- Frontend: Vite HMR
- AI Server: Volume mount for /src
- API Server: Rebuild required
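Because only the API Server lacks hot reload, a typical inner loop after changing Go code is to rebuild the images and restart the stack with the helper script shown above:
# after editing Go code under backend/api_server
./docker-start.sh build
./docker-start.sh restart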
# API Server
cd backend/api_server
go test ./...
# AI Server
cd backend/ai_server
poetry run pytest
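Both commands accept the usual runner flags, e.g. verbose output and (for pytest) stopping at the first failure:
# Go tests with verbose output and coverage
(cd backend/api_server && go test -v -cover ./...)
# Python tests, verbose, stop on first failure
(cd backend/ai_server && poetry run pytest -v -x)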
Prometheus metrics are available at http://localhost:9090
AI Server Metrics:
- Model training time
- Task execution duration
- Active models count
- Scheduler status
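Prometheus's standard query API can be used to check what is being scraped; up is a built-in Prometheus metric, while the AI Server's own metric names are easiest to discover through the Prometheus UI:
# list which scrape targets are currently up
curl -s 'http://localhost:9090/api/v1/query?query=up'
# browse all metric names interactively at http://localhost:9090/graph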
API Server:
- STATUS_DEV: dev/test/prod
- HOST_ADDRESS: Bind address
- HOST_PORT: Port number
- AI_SERVER_URL: AI Server URL
- REDIS_HOST, REDIS_PORT: Redis config
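A minimal sketch of backend/api_server/.env using these variables; apart from the ports already listed above, the values are illustrative rather than canonical defaults:
# illustrative values only
STATUS_DEV=dev
HOST_ADDRESS=0.0.0.0
HOST_PORT=2030
AI_SERVER_URL=http://localhost:9999
REDIS_HOST=localhost
REDIS_PORT=6379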
AI Server:
- HOST: Bind address
- PORT: Port number
- MYSQL_*: MySQL configuration
- MONGODB_*: MongoDB configuration
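A similar sketch for the AI Server; the concrete MYSQL_* / MONGODB_* names below are hypothetical expansions of the wildcards, shown only to indicate the shape of the configuration:
# illustrative values; the MYSQL_*/MONGODB_* names are hypothetical
HOST=0.0.0.0
PORT=9999
MYSQL_HOST=localhost
MYSQL_PORT=3306
MONGODB_URI=mongodb://localhost:27017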
Frontend:
- VITE_API_SERVER_URL: API Server URL
- VITE_AI_SERVER_URL: AI Server URL
- VITE_SUPABASE_*: Supabase config
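And for frontend/project/.env; the VITE_SUPABASE_* entries are hypothetical placeholders for whatever keys the Supabase setup requires:
VITE_API_SERVER_URL=http://localhost:2030
VITE_AI_SERVER_URL=http://localhost:9999
# hypothetical VITE_SUPABASE_* placeholders
VITE_SUPABASE_URL=<your-supabase-url>
VITE_SUPABASE_ANON_KEY=<your-supabase-anon-key>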
A Bruno API collection is available in the vrecom_api/ directory for testing all endpoints.
Available Collections:
- api_server/: API Server endpoints (Authentication, Ping)
- ai_server/: AI Server endpoints (Models, Tasks, Data Chefs, Scheduler, Metrics)
To use:
- Install Bruno
- Open the vrecom_api folder as a collection
- Start testing endpoints
- Fork repository
- Create feature branch
- Commit changes
- Push to branch
- Create Pull Request
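In command form (the branch name is only an example):
git checkout -b feature/my-change
# ...make your changes...
git commit -am "Describe the change"
git push origin feature/my-change
# then open a Pull Request against the upstream repository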
See LICENSE.txt