-# PloyLingua
+# PolyLingua
 
 A production-ready translation service built with **OPEA (Open Platform for Enterprise AI)** components, featuring a modern Next.js UI and microservices architecture.
 
-## 🏗️ Architecture
-
-This service implements a **5-layer microservices architecture**:
-
-```
-┌─────────────────────────────────────────────────────────────┐
-│                    Nginx Reverse Proxy                      │
-│                        (Port 80)                            │
-└────────────────┬────────────────────────────────────────────┘
-                 │
-        ┌────────┴─────────┐
-        │                  │
-┌───────▼────────┐  ┌──────▼──────────────────┐
-│   Next.js UI   │  │ Translation Megaservice │
-│  (Port 5173)   │  │      (Port 8888)        │
-└────────────────┘  └──────┬──────────────────┘
-                           │
-                  ┌────────▼────────────┐
-                  │  LLM Microservice   │
-                  │    (Port 9000)      │
-                  └────────┬────────────┘
-                           │
-                  ┌────────▼────────────┐
-                  │  TGI Model Server   │
-                  │    (Port 8008)      │
-                  └─────────────────────┘
-```
-
 ### Components
 
-1. **TGI Service** - HuggingFace Text Generation Inference for model serving
+1. **vLLM Service** - High-performance LLM inference engine for model serving
 2. **LLM Microservice** - OPEA wrapper providing standardized API
-3. **Translation Megaservice** - Orchestrator that formats prompts and routes requests
+3. **PolyLingua Megaservice** - Orchestrator that formats prompts and routes requests
 4. **UI Service** - Next.js 14 frontend with React and TypeScript
 5. **Nginx** - Reverse proxy for unified access
 
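The components above can be exercised end-to-end once the stack is up. Below is a minimal Python client sketch; the `/v1/translation` route and the request field names are assumptions for illustration (only the megaservice port 8888 and the OpenAI-style response format are documented here), so verify them against the API section before relying on this:

```python
import json
import urllib.request

def translate(text: str, language_from: str, language_to: str,
              base_url: str = "http://localhost:8888") -> str:
    """Send text to the megaservice and return the translated string."""
    payload = json.dumps({
        "language_from": language_from,   # hypothetical field names --
        "language_to": language_to,       # check the API section for the
        "source_language": text,          # exact request contract
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/translation",     # assumed route
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    # Responses use the OpenAI-style format shown in the API section.
    return body["choices"][0]["message"]["content"]
```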
@@ -59,7 +31,7 @@ cd PolyLingua
 
 You'll be prompted for:
 - **HuggingFace API Token** - Get from https://huggingface.co/settings/tokens
-- **Model ID** - Default: `haoranxu/ALMA-13B` (translation-optimized model)
+- **Model ID** - Default: `swiss-ai/Apertus-8B-Instruct-2509` (translation-optimized model)
 - **Host IP** - Your server's IP address
 - **Ports and proxy settings**
 
@@ -113,7 +85,7 @@ Key variables in `.env`:
 | Variable | Description | Default |
 |----------|-------------|---------|
 | `HF_TOKEN` | HuggingFace API token | Required |
-| `LLM_MODEL_ID` | Model to use for translation | `haoranxu/ALMA-13B` |
+| `LLM_MODEL_ID` | Model to use for translation | `swiss-ai/Apertus-8B-Instruct-2509` |
 | `MODEL_CACHE` | Directory for model storage | `./data` |
 | `host_ip` | Server IP address | `localhost` |
 | `NGINX_PORT` | External port for web access | `80` |
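Assembled from the defaults in the table above, a starter `.env` might look like the sketch below; the `HF_TOKEN` value is a placeholder, and `set_env.sh` is the supported way to generate the real file:

```bash
# Illustrative .env -- generate the real file with set_env.sh
HF_TOKEN=hf_xxxxxxxxxxxxxxxx                    # placeholder; use your own token
LLM_MODEL_ID=swiss-ai/Apertus-8B-Instruct-2509
MODEL_CACHE=./data
host_ip=localhost
NGINX_PORT=80
```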
@@ -133,24 +105,24 @@ The service works with any HuggingFace text generation model. Recommended models
 ### Project Structure
 
 ```
-opea-translation/
-├── translation.py         # Backend translation service
-├── requirements.txt       # Python dependencies
-├── Dockerfile             # Backend container definition
-├── docker-compose.yaml    # Multi-service orchestration
-├── set_env.sh             # Environment setup script
-├── .env.example           # Environment template
-├── ui/                    # Next.js frontend
-│   ├── app/               # Next.js app directory
-│   ├── components/        # React components
-│   ├── Dockerfile         # UI container definition
-│   └── package.json       # Node dependencies
-└── deploy/                # Deployment scripts
-    ├── nginx.conf         # Nginx configuration
-    ├── build.sh           # Image build script
-    ├── start.sh           # Service startup script
-    ├── stop.sh            # Service shutdown script
-    └── test.sh            # API testing script
+PolyLingua/
+├── polylingua.py          # Backend PolyLingua service
+├── requirements.txt       # Python dependencies
+├── Dockerfile             # Backend container definition
+├── docker-compose.yaml    # Multi-service orchestration
+├── set_env.sh             # Environment setup script
+├── .env.example           # Environment template
+├── ui/                    # Next.js frontend
+│   ├── app/               # Next.js app directory
+│   ├── components/        # React components
+│   ├── Dockerfile         # UI container definition
+│   └── package.json       # Node dependencies
+└── deploy/                # Deployment scripts
+    ├── nginx.conf         # Nginx configuration
+    ├── build.sh           # Image build script
+    ├── start.sh           # Service startup script
+    ├── stop.sh            # Service shutdown script
+    └── test.sh            # API testing script
 ```
 
 ### Running Locally (Development)
@@ -166,7 +138,7 @@ export LLM_SERVICE_PORT=9000
 export MEGA_SERVICE_PORT=8888
 
 # Run service
-python translation.py
+python polylingua.py
 ```
 
 **Frontend:**
@@ -194,7 +166,7 @@ Translate text between languages.
 **Response:**
 ```json
 {
-  "model": "translation",
+  "model": "polylingua",
   "choices": [{
     "index": 0,
     "message": {
@@ -216,8 +188,8 @@ Translate text between languages.
 docker compose logs -f
 
 # Specific service
-docker compose logs -f translation-backend-server
-docker compose logs -f translation-ui-server
+docker compose logs -f polylingua-backend-server
+docker compose logs -f polylingua-ui-server
 ```
 
 ### Stop Services
@@ -253,7 +225,7 @@ docker compose down -v
 
 1. Check if ports are available:
    ```bash
-   sudo lsof -i :80,8888,9000,8008,5173
+   sudo lsof -i :80,8888,9000,8028,5173
    ```
 
 2. Verify environment variables:
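Step 1's port check can also be scripted. Below is a sketch using only Python's standard library, with the port list taken from the `lsof` command above:

```python
import socket

# Ports used by this stack: nginx, megaservice, LLM wrapper, vLLM, UI.
PORTS = [80, 8888, 9000, 8028, 5173]

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        return sock.connect_ex((host, port)) == 0

for port in PORTS:
    print(f"port {port}: {'in use' if port_in_use(port) else 'free'}")
```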
@@ -276,9 +248,9 @@ docker compose down -v
 
 ### Translation errors
 
-- Wait for TGI service to fully initialize (check logs)
+- Wait for vLLM service to fully initialize (check logs)
 - Verify LLM service is healthy: `curl http://localhost:9000/v1/health`
-- Check TGI service: `curl http://localhost:8008/health`
+- Check vLLM service: `curl http://localhost:8028/health`
 
 ### UI can't connect to backend
 
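Rather than re-running these curl checks by hand, startup can be gated on the health endpoint. A minimal poller sketch, assuming the vLLM health URL from the bullet above (`http://localhost:8028/health`):

```python
import time
import urllib.request

def wait_for_health(url: str, retries: int = 30, delay: float = 10.0) -> bool:
    """Poll `url` until it returns HTTP 200; False once retries are exhausted."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:  # connection refused, timeout, DNS failure, ...
            pass
        if attempt < retries:
            time.sleep(delay)
    return False

# Example: wait_for_health("http://localhost:8028/health")
```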
@@ -293,7 +265,7 @@ docker compose down -v
 - [OPEA Project](https://github.com/opea-project)
 - [GenAIComps](https://github.com/opea-project/GenAIComps)
 - [GenAIExamples](https://github.com/opea-project/GenAIExamples)
-- [HuggingFace Text Generation Inference](https://github.com/huggingface/text-generation-inference)
+- [vLLM](https://github.com/vllm-project/vllm)
 
 ## 📧 Support
 