
Commit ef5934f

Feat: initial commit for polylingua

1 parent cb4a12a, commit ef5934f

31 files changed: +2066, -0 lines changed

PolyLingua/Dockerfile

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

FROM python:3.11-slim

WORKDIR /home/user

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

# Copy translation service
COPY translation.py .

# Expose service port
EXPOSE 8888

# Run the translation service
ENTRYPOINT ["python", "translation.py"]
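
As a quick standalone smoke test of this image (not part of the repository's scripts), you could build and run it directly. The sketch below assumes the default image tag used by `deploy/build.sh` and the environment variable names shown in the README's development section; `host.docker.internal` is a Docker Desktop convenience for reaching an LLM microservice already running on the host, so substitute your host IP on Linux.

```bash
# Build the backend image with the same default tag deploy/build.sh uses
docker build -t opea/translation:latest -f Dockerfile .

# Run it against an LLM microservice that is already listening on port 9000
docker run --rm -p 8888:8888 \
  -e LLM_SERVICE_HOST_IP=host.docker.internal \
  -e LLM_SERVICE_PORT=9000 \
  -e MEGA_SERVICE_PORT=8888 \
  opea/translation:latest
```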

PolyLingua/README.md

Lines changed: 307 additions & 0 deletions
@@ -0,0 +1,307 @@
# PolyLingua

A production-ready translation service built with **OPEA (Open Platform for Enterprise AI)** components, featuring a modern Next.js UI and a microservices architecture.

## 🏗️ Architecture

This service implements a **5-layer microservices architecture**:

```
┌─────────────────────────────────────────────────────────────┐
│                     Nginx Reverse Proxy                      │
│                          (Port 80)                           │
└────────────────┬────────────────────────────────────────────┘
                 │
        ┌────────┴─────────┐
        │                  │
┌───────▼────────┐  ┌──────▼──────────────────┐
│   Next.js UI   │  │ Translation Megaservice │
│  (Port 5173)   │  │      (Port 8888)        │
└────────────────┘  └──────┬──────────────────┘
                           │
                  ┌────────▼────────────┐
                  │  LLM Microservice   │
                  │     (Port 9000)     │
                  └────────┬────────────┘
                           │
                  ┌────────▼────────────┐
                  │  TGI Model Server   │
                  │     (Port 8008)     │
                  └─────────────────────┘
```

### Components

1. **TGI Service** - HuggingFace Text Generation Inference for model serving
2. **LLM Microservice** - OPEA wrapper providing a standardized API
3. **Translation Megaservice** - Orchestrator that formats prompts and routes requests
4. **UI Service** - Next.js 14 frontend with React and TypeScript
5. **Nginx** - Reverse proxy for unified access

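Once the stack is up, you can trace a request through these layers by probing each service directly. This is only a sketch: the ports come from the diagram above and the health endpoints are the ones referenced in the Troubleshooting section; adjust the host if you are not running on localhost.

```bash
# Probe each layer of the stack, outside in (ports as shown in the diagram)

# Nginx reverse proxy (serves the web UI)
curl -s http://localhost:80/ > /dev/null && echo "nginx OK"

# TGI model server
curl -s http://localhost:8008/health

# OPEA LLM microservice
curl -s http://localhost:9000/v1/health

# Translation megaservice
curl -s -X POST http://localhost:8888/v1/translation \
  -H "Content-Type: application/json" \
  -d '{"language_from": "English", "language_to": "German", "source_language": "Good morning"}'
```
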
## 🚀 Quick Start

### Prerequisites

- Docker and Docker Compose
- Git
- HuggingFace account (for model access)
- 8GB+ RAM recommended
- ~10GB disk space for models

### 1. Clone and Setup

```bash
cd PolyLingua

# Configure environment variables
./set_env.sh
```

You'll be prompted for:

- **HuggingFace API Token** - Get from https://huggingface.co/settings/tokens
- **Model ID** - Default: `haoranxu/ALMA-13B` (translation-optimized model)
- **Host IP** - Your server's IP address
- **Ports and proxy settings**

### 2. Build Images

```bash
./deploy/build.sh
```

This builds:

- Translation backend service
- Next.js UI service

### 3. Start Services

```bash
./deploy/start.sh
```

Wait for services to initialize (~2-5 minutes for the first run as models download).

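If you prefer to script the wait instead of watching logs, one option is to poll the TGI health endpoint (the same port and path used in the Troubleshooting section) until it responds. A small sketch:

```bash
# Block until the TGI model server reports healthy (first run can take several minutes)
until curl -sf http://localhost:8008/health > /dev/null; do
  echo "Waiting for TGI to finish loading the model..."
  sleep 10
done
echo "TGI is ready."
```
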
### 4. Access the Application

- **Web UI**: http://localhost:80
- **API Endpoint**: http://localhost:8888/v1/translation

### 5. Test the Service

```bash
./deploy/test.sh
```

Or test manually:

```bash
curl -X POST http://localhost:8888/v1/translation \
  -H "Content-Type: application/json" \
  -d '{
    "language_from": "English",
    "language_to": "Spanish",
    "source_language": "Hello, how are you today?"
  }'
```

## 📋 Configuration

### Environment Variables

Key variables in `.env`:

| Variable | Description | Default |
|----------|-------------|---------|
| `HF_TOKEN` | HuggingFace API token | Required |
| `LLM_MODEL_ID` | Model to use for translation | `haoranxu/ALMA-13B` |
| `MODEL_CACHE` | Directory for model storage | `./data` |
| `host_ip` | Server IP address | `localhost` |
| `NGINX_PORT` | External port for web access | `80` |

See `.env.example` for full configuration options.

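For orientation, a minimal `.env` might look like the sketch below. The variable names come from the table above; the token and IP values are placeholders, so treat `./set_env.sh` and `.env.example` as the source of truth.

```bash
# Example .env (illustrative values only; see .env.example for the full list)

# HuggingFace API token (required)
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
# Model used for translation
LLM_MODEL_ID=haoranxu/ALMA-13B
# Where downloaded models are stored
MODEL_CACHE=./data
# This server's IP address
host_ip=192.168.1.10
# External port for web access
NGINX_PORT=80
```
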
### Supported Models

The service works with any HuggingFace text generation model. The documented default is `haoranxu/ALMA-13B`; other options include:

- **swiss-ai/Apertus-8B-Instruct-2509** - Multilingual instruction-tuned model
- **haoranxu/ALMA-7B** - Smaller specialized translation model

## 🛠️ Development

### Project Structure

```
PolyLingua/
├── translation.py          # Backend translation service
├── requirements.txt        # Python dependencies
├── Dockerfile              # Backend container definition
├── docker-compose.yaml     # Multi-service orchestration
├── set_env.sh              # Environment setup script
├── .env.example            # Environment template
├── ui/                     # Next.js frontend
│   ├── app/                # Next.js app directory
│   ├── components/         # React components
│   ├── Dockerfile          # UI container definition
│   └── package.json        # Node dependencies
└── deploy/                 # Deployment scripts
    ├── nginx.conf          # Nginx configuration
    ├── build.sh            # Image build script
    ├── start.sh            # Service startup script
    ├── stop.sh             # Service shutdown script
    └── test.sh             # API testing script
```

### Running Locally (Development)

**Backend:**

```bash
# Install dependencies
pip install -r requirements.txt

# Set environment variables
export LLM_SERVICE_HOST_IP=localhost
export LLM_SERVICE_PORT=9000
export MEGA_SERVICE_PORT=8888

# Run the service
python translation.py
```

**Frontend:**

```bash
cd ui
npm install
npm run dev
```

### API Reference

#### POST /v1/translation

Translate text between languages.

**Request:**

```json
{
  "language_from": "English",
  "language_to": "Spanish",
  "source_language": "Your text to translate"
}
```

**Response:**

```json
{
  "model": "translation",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Translated text here"
    },
    "finish_reason": "stop"
  }],
  "usage": {}
}
```

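As a quick end-to-end check of this contract, the sketch below posts a request and pulls the translated text out of `choices[0].message.content`; it assumes the service is reachable on localhost:8888 and that `jq` is installed.

```bash
# Call the translation API and print only the translated text (requires jq)
curl -s -X POST http://localhost:8888/v1/translation \
  -H "Content-Type: application/json" \
  -d '{
    "language_from": "English",
    "language_to": "Spanish",
    "source_language": "Hello, how are you today?"
  }' | jq -r '.choices[0].message.content'
```
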
## 🔧 Operations

### View Logs

```bash
# All services
docker compose logs -f

# Specific service
docker compose logs -f translation-backend-server
docker compose logs -f translation-ui-server
```

### Stop Services

```bash
./deploy/stop.sh
```

### Update Services

```bash
# Rebuild images
./deploy/build.sh

# Restart services
docker compose down
./deploy/start.sh
```

### Clean Up

```bash
# Stop and remove containers
docker compose down

# Remove volumes (including model cache)
docker compose down -v
```

## 🐛 Troubleshooting

### Service won't start

1. Check if ports are available:
   ```bash
   sudo lsof -i :80,8888,9000,8008,5173
   ```

2. Verify environment variables:
   ```bash
   cat .env
   ```

3. Check service health:
   ```bash
   docker compose ps
   docker compose logs
   ```

### Model download fails

- Ensure `HF_TOKEN` is set correctly
- Check the internet connection
- Verify the model ID exists on HuggingFace
- Check disk space in the `MODEL_CACHE` directory (see the quick checks below)

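Two quick checks cover the most common causes; the paths below assume the defaults from the configuration table (`.env` in the project root, `./data` as `MODEL_CACHE`).

```bash
# Confirm the token variable is present and non-empty in .env (does not validate it)
grep -q '^HF_TOKEN=.' .env && echo "HF_TOKEN is set" || echo "HF_TOKEN is missing or empty"

# Check free disk space where models are cached (default MODEL_CACHE)
df -h ./data
```
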
### Translation errors

- Wait for the TGI service to fully initialize (check logs)
- Verify the LLM service is healthy: `curl http://localhost:9000/v1/health`
- Check the TGI service: `curl http://localhost:8008/health`

### UI can't connect to backend

- Verify `BACKEND_SERVICE_ENDPOINT` in `.env`
- Check if the backend is running: `docker compose ps`
- Test the API directly: `curl http://localhost:8888/v1/translation`

## 🔗 Resources

- [OPEA Project](https://github.com/opea-project)
- [GenAIComps](https://github.com/opea-project/GenAIComps)
- [GenAIExamples](https://github.com/opea-project/GenAIExamples)
- [HuggingFace Text Generation Inference](https://github.com/huggingface/text-generation-inference)

## 📧 Support

For issues and questions:

- Open an issue on GitHub
- Check existing issues for solutions
- Review the OPEA documentation

---

**Built with OPEA - Open Platform for Enterprise AI** 🚀

PolyLingua/deploy/build.sh

Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
#!/bin/bash
# Copyright (C) 2024
# SPDX-License-Identifier: Apache-2.0

set -e

echo "======================================"
echo "Building OPEA Translation Service Images"
echo "======================================"

# Source environment variables
if [ -f .env ]; then
    echo "Loading environment from .env file..."
    export $(cat .env | grep -v '^#' | xargs)
else
    echo "Warning: .env file not found. Using default values."
    echo "Run './set_env.sh' to configure environment variables."
fi

# Build translation backend
echo ""
echo "Building translation backend service..."
docker build --no-cache -t ${REGISTRY:-opea}/translation:${TAG:-latest} -f Dockerfile .

# Build translation UI
echo ""
echo "Building translation UI service..."
docker build --no-cache \
    --build-arg BACKEND_SERVICE_ENDPOINT=${BACKEND_SERVICE_ENDPOINT} \
    -t ${REGISTRY:-opea}/translation-ui:${TAG:-latest} \
    -f ui/Dockerfile ./ui

echo ""
echo "======================================"
echo "Build completed successfully!"
echo "======================================"
echo ""
echo "Images built:"
echo "  - ${REGISTRY:-opea}/translation:${TAG:-latest}"
echo "  - ${REGISTRY:-opea}/translation-ui:${TAG:-latest}"
echo ""
echo "To start the services, run:"
echo "  ./deploy/start.sh"
echo ""
