A powerful proxy that unifies various client-only large-model APIs (Gemini CLI, Antigravity, Qwen Code, Kiro, ...) by simulating their client requests and exposing them through a local OpenAI-compatible interface.
AIClient2API is an API proxy service that breaks through client limitations, converting free large models originally restricted to client use only (such as Gemini, Antigravity, Qwen Code, Kiro) into standard OpenAI-compatible interfaces that can be called by any application. Built on Node.js, it supports intelligent conversion between OpenAI, Claude, and Gemini protocols, enabling tools like Cherry-Studio, NextChat, and Cline to freely use advanced models such as Claude Opus 4.5, Gemini 3.0 Pro, and Qwen3 Coder Plus at scale. The project adopts a modular architecture based on strategy and adapter patterns, with built-in account pool management, intelligent polling, automatic failover, and health check mechanisms, ensuring 99.9% service availability.
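In concrete terms, "OpenAI-compatible" means any HTTP client can call the proxy with a standard `/v1/chat/completions` request. The sketch below only builds such a request; the port, API key, and model name are placeholders for your own deployment:

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build the URL, headers, and JSON body of an OpenAI-style
    /v1/chat/completions request aimed at the local proxy."""
    url = f"{base_url}/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "http://localhost:3000", "your-api-key", "claude-opus-4.5", "Hello"
)
print(url)  # http://localhost:3000/v1/chat/completions
```

POST the result with any HTTP library, or simply point an existing OpenAI SDK's base URL at `http://localhost:3000/v1`.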
Note
Important Milestone
- Thanks to Ruan Yifeng for the recommendation in Weekly Issue 359
Version Update Log
Click to expand detailed version history
- 2026.01.07 - Added iFlow protocol support, enabling access to Qwen, Kimi, DeepSeek, and GLM series models via OAuth authentication with automatic token refresh
- 2026.01.03 - Added theme switching functionality and optimized provider pool initialization, removed the fallback strategy of using provider default configuration
- 2025.12.30 - Added main process management and automatic update functionality
- 2025.12.25 - Unified configuration management: All configs centralized to the `configs/` directory. Docker users need to update the mount path to `-v "local_path:/app/configs"`
- 2025.12.11 - Automatically built Docker images are now available on Docker Hub: justlikemaki/aiclient-2-api
- 2025.11.30 - Added Antigravity protocol support, enabling access to Gemini 3 Pro, Claude Sonnet 4.5, and other models via Google internal interfaces
- 2025.11.16 - Added Ollama protocol support, unified interface to access all supported models (Claude, Gemini, Qwen, OpenAI, etc.)
- 2025.11.11 - Added Web UI management console, supporting real-time configuration management and health status monitoring
- 2025.11.06 - Added support for Gemini 3 Preview, enhanced model compatibility and performance optimization
- 2025.10.18 - Kiro open registration, new accounts get 500 credits, full support for Claude Sonnet 4.5
- 2025.09.01 - Integrated Qwen Code CLI, added `qwen3-coder-plus` model support
- 2025.08.29 - Released the account pool management feature, supporting multi-account polling, intelligent failover, and automatic degradation strategies
  - Configuration: Add the `PROVIDER_POOLS_FILE_PATH` parameter in `configs/config.json`
  - Reference configuration: provider_pools.json
- Development History
- Support for Gemini CLI, Kiro, and other client-to-API conversions
- Mutual conversion among the OpenAI, Claude, and Gemini protocols, with automatic intelligent switching
- Multi-Model Unified Interface: Through standard OpenAI-compatible protocol, configure once to access mainstream large models including Gemini, Claude, Qwen Code, Kimi K2, MiniMax M2
- Flexible Switching Mechanism: Path routing, support dynamic model switching via startup parameters or environment variables to meet different scenario requirements
- Zero-Cost Migration: Fully compatible with OpenAI API specifications, tools like Cherry-Studio, NextChat, Cline can be used without modification
- Multi-Protocol Intelligent Conversion: Support intelligent conversion between OpenAI, Claude, and Gemini protocols for cross-protocol model invocation
- Bypass Official Restrictions: Utilize OAuth authorization mechanism to effectively break through rate and quota limits of services like Gemini, Antigravity
- Free Advanced Models: Use Claude Opus 4.5 for free via Kiro API mode, use Qwen3 Coder Plus via Qwen OAuth mode, reducing usage costs
- Intelligent Account Pool Scheduling: Support multi-account polling, automatic failover, and configuration degradation, ensuring 99.9% service availability
- Full-Chain Log Recording: Capture all request and response data, supporting auditing and debugging
- Private Dataset Construction: Quickly build proprietary training datasets based on log data
- System Prompt Management: Support override and append modes, combining unified base instructions with personalized extensions
- Web UI Management Console: Real-time configuration management, health status monitoring, API testing and log viewing
- Modular Architecture: Based on strategy and adapter patterns, adding new model providers requires only 3 steps
- Complete Test Coverage: Integration and unit test coverage 90%+, ensuring code quality
- Containerized Deployment: Provides Docker support, one-click deployment, cross-platform operation
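To make the "strategy and adapter patterns" point concrete, here is a hypothetical sketch (not the project's actual source) of what a provider adapter interface can look like: each provider translates between the OpenAI protocol and its own, and adding a new one means subclassing and registering.

```python
from abc import ABC, abstractmethod

class ProviderAdapter(ABC):
    """Common interface every model-provider adapter implements."""

    @abstractmethod
    def to_provider_request(self, openai_request: dict) -> dict: ...

    @abstractmethod
    def to_openai_response(self, provider_response: dict) -> dict: ...

ADAPTERS: dict = {}

def register(name: str, adapter: ProviderAdapter) -> None:
    ADAPTERS[name] = adapter

# Adding a new provider = (1) subclass, (2) implement the two
# conversions, (3) register it under its name.
class EchoAdapter(ProviderAdapter):
    def to_provider_request(self, openai_request):
        # Take the last user message as the provider's prompt
        return {"prompt": openai_request["messages"][-1]["content"]}

    def to_openai_response(self, provider_response):
        # Wrap the provider's text in an OpenAI-shaped response
        return {"choices": [{"message": {"role": "assistant",
                                         "content": provider_response["text"]}}]}

register("echo", EchoAdapter())
req = ADAPTERS["echo"].to_provider_request(
    {"messages": [{"role": "user", "content": "hi"}]})
print(req)  # {'prompt': 'hi'}
```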
- 💡 Core Advantages
- Quick Start
- Authorization Configuration Guide
- Authorization File Storage Paths
- 📦 Ollama Protocol Usage Examples
- ⚙️ Advanced Configuration
- ❓ FAQ
- Open Source License
- Acknowledgements
⚠️ Disclaimer
The most recommended way to use AIClient-2-API is to start it through an automated script and configure it visually directly in the Web UI console.
```shell
docker run -d -p 3000:3000 -p 8085-8087:8085-8087 -p 19876-19880:19876-19880 --restart=always -v "your_path:/app/configs" --name aiclient2api justlikemaki/aiclient-2-api
```

Parameter Description:
- `-d`: Run the container in the background
- `-p 3000:3000 ...`: Port mapping. 3000 is for the Web UI; the others are for OAuth callbacks (Gemini: 8085, Antigravity: 8086, Kiro: 19876-19880)
- `--restart=always`: Container auto-restart policy
- `-v "your_path:/app/configs"`: Mount the configuration directory (replace "your_path" with an actual path, e.g., `/home/user/aiclient-configs`)
- `--name aiclient2api`: Container name
You can also use Docker Compose for deployment. First, navigate to the docker directory:
```shell
cd docker
mkdir -p configs
docker compose up -d
```

To build from source instead of using the pre-built image, edit docker-compose.yml:
- Comment out the `image: justlikemaki/aiclient-2-api:latest` line
- Uncomment the `build:` section
- Run `docker compose up -d --build`
- Linux/macOS: `chmod +x install-and-run.sh && ./install-and-run.sh`
- Windows: Double-click `install-and-run.bat`
After the server starts, open your browser and visit: http://localhost:3000
Default Password: `admin123` (can be changed in the console or by modifying the `pwd` file after login)
On the "Configuration" page, you can:
- ✅ Fill in the API Key for each provider or upload OAuth credential files
- ✅ Switch default model providers in real-time
- ✅ Monitor health status and real-time request logs
```text
========================================
  AI Client 2 API Quick Install Script
========================================
[Check] Checking if Node.js is installed...
✅ Node.js is installed, version: v20.10.0
✅ Found package.json file
✅ node_modules directory already exists
✅ Project file check completed
========================================
  Starting AI Client 2 API Server...
========================================
Server will start on http://localhost:3000
Visit http://localhost:3000 to view management interface
ℹ️ Press Ctrl+C to stop server
```
💡 Tip: The script will automatically install dependencies and start the server. If you encounter any issues, the script provides clear error messages and suggested solutions.
A functional Web management interface, including:
- Dashboard: System overview, interactive routing examples, client configuration guide
- ⚙️ Configuration: Real-time parameter modification, supporting all providers (Gemini, Antigravity, OpenAI, Claude, Kiro, Qwen), including advanced settings and file uploads
- Provider Pools: Monitor active connections, provider health statistics, enable/disable management
- Config Files: Centralized OAuth credential management, supporting search filtering and file operations
- Real-time Logs: Real-time display of system and request logs, with management controls
- Login Verification: Default password `admin123`, can be modified via the `pwd` file

Access: http://localhost:3000 → Login → Sidebar navigation → changes take effect immediately
Supports various input types such as images and documents, providing you with a richer interaction experience and more powerful application scenarios.
Seamlessly support the following latest large models, just configure the corresponding endpoint in Web UI or configs/config.json:
- Claude 4.5 Opus - Anthropic's strongest model ever, now supported via Kiro, Antigravity
- Gemini 3 Pro - Google's next-generation architecture preview, now supported via Gemini, Antigravity
- Qwen3 Coder Plus - Alibaba Tongyi Qianwen's latest code-specific model, now supported via Qwen Code
- Kimi K2 / MiniMax M2 - Synchronized support for top domestic flagship models, now supported via custom OpenAI, Claude
Click to expand detailed authorization configuration steps for each provider
💡 Tip: For the best experience, it is recommended to manage authorization visually through the Web UI console.
In the Web UI management interface, you can complete authorization configuration rapidly:
- Generate Authorization: On the "Provider Pools" page or "Configuration" page, click the "Generate Authorization" button in the upper right corner of the corresponding provider (e.g., Gemini, Qwen).
- Scan/Login: An authorization dialog will pop up; click "Open in Browser" for login verification. For Qwen, just complete the web login; for Gemini and Antigravity, complete the Google account authorization.
- Auto-Save: After successful authorization, the system will automatically obtain credentials and save them to the corresponding directory under `configs/`. You can see the newly generated credentials on the "Config Files" page.
- Visual Management: You can upload or delete credentials at any time in the Web UI, or use the "Quick Associate" function to bind existing credential files to providers with one click.
- Obtain OAuth Credentials: Visit Google Cloud Console to create a project and enable Gemini API
- Project Configuration: You may need to provide a valid Google Cloud project ID, which can be specified via the startup parameter `--project-id`
- Ensure Project ID: When configuring in the Web UI, ensure the project ID entered matches the project ID displayed in the Google Cloud Console and Gemini CLI.
- Personal Account: Personal accounts require separate authorization, application channels have been closed.
- Pro Member: Antigravity is temporarily open to Pro members; you need to purchase a Pro membership first.
- Organization Account: Organization accounts require separate authorization, contact the administrator to obtain authorization.
- First Authorization: After configuring the Qwen service, the system will automatically open the authorization page in the browser
- Recommended Parameters: Use official default parameters for best results
{ "temperature": 0, "top_p": 1 }
- Environment Preparation: Download and install Kiro client
- Complete Authorization: Log in to your account in the client to generate the `kiro-auth-token.json` credential file
- Best Practice: Recommended to use with Claude Code for the optimal experience
- Important Notice: Kiro service usage policy has been updated, please visit the official website for the latest usage restrictions and terms
- Create Pool Configuration File: Create a configuration file referencing provider_pools.json.example
- Configure Pool Parameters: Set `PROVIDER_POOLS_FILE_PATH` in `configs/config.json` to point to the pool configuration file
- Startup Parameter Configuration: Use the `--provider-pools-file <path>` parameter to specify the pool configuration file path
- Health Check: The system will automatically perform periodic health checks and avoid using unhealthy providers
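Conceptually, the pool combines polling with the health checks described above. The sketch below is illustrative only; the `isHealthy` field and the scheduler itself are assumptions, not the project's actual implementation:

```python
import itertools

class ProviderPool:
    """Round-robin over accounts, skipping ones marked unhealthy."""

    def __init__(self, accounts):
        self.accounts = accounts
        self._cycle = itertools.cycle(accounts)

    def next_healthy(self):
        # Visit at most one full cycle before giving up
        for _ in range(len(self.accounts)):
            account = next(self._cycle)
            if account.get("isHealthy", True):
                return account
        return None  # every account failed its health check

pool = ProviderPool([
    {"uuid": "provider-1", "isHealthy": False},
    {"uuid": "provider-2", "isHealthy": True},
])
print(pool.next_healthy()["uuid"])  # provider-2
```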
Click to expand default storage locations for authorization credentials
Default storage locations for authorization credential files of each service:
| Service | Default Path | Description |
|---|---|---|
| Gemini | `~/.gemini/oauth_creds.json` | OAuth authentication credentials |
| Kiro | `~/.aws/sso/cache/kiro-auth-token.json` | Kiro authentication token |
| Qwen | `~/.qwen/oauth_creds.json` | Qwen OAuth credentials |
| Antigravity | `~/.antigravity/oauth_creds.json` | Antigravity OAuth credentials (supports Claude 4.5 Opus) |

Note: `~` represents the user home directory (Windows: `C:\Users\username`, Linux/macOS: `/home/username` or `/Users/username`)
Custom Path: A custom storage location can be specified via the relevant parameters in the configuration file or environment variables
This project supports the Ollama protocol, allowing access to all supported models through a unified interface. The Ollama endpoint provides standard interfaces such as /api/tags, /api/chat, /api/generate, etc.
Ollama API Call Examples:
- List all available models:

```shell
curl http://localhost:3000/ollama/api/tags \
  -H "Authorization: Bearer your-api-key"
```

- Chat interface:

```shell
curl http://localhost:3000/ollama/api/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "[Claude] claude-sonnet-4.5",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```

- Specify a provider using a model prefix:
  - `[Kiro]` - Access Claude models using the Kiro API
  - `[Claude]` - Use the official Claude API
  - `[Gemini CLI]` - Access via Gemini CLI OAuth
  - `[OpenAI]` - Use the official OpenAI API
  - `[Qwen CLI]` - Access via Qwen OAuth
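The bracketed-prefix convention is easy to handle programmatically. Below is a hypothetical helper that splits `"[Provider] model"` strings; the parsing rule is inferred from the examples above, not taken from the project's source:

```python
def split_model_tag(model: str):
    """Split "[Provider] model-name" into (provider, model).

    Returns (None, model) when no bracketed prefix is present.
    """
    if model.startswith("["):
        end = model.find("]")
        if end != -1:
            return model[1:end], model[end + 1:].strip()
    return None, model

print(split_model_tag("[Kiro] claude-sonnet-4.5"))  # ('Kiro', 'claude-sonnet-4.5')
print(split_model_tag("gemini-3-pro"))              # (None, 'gemini-3-pro')
```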
Click to expand proxy configuration, model filtering, and Fallback advanced settings
This project supports flexible proxy configuration, allowing you to configure a unified proxy for different providers or use provider-specific proxied endpoints.
Configuration Methods:
- Web UI Configuration (Recommended): Convenient configuration management
In the "Configuration" page of the Web UI, you can visually configure all proxy options:
- Unified Proxy: Fill in the proxy address in the "Proxy Settings" area and check the providers that need to use the proxy
- Provider Endpoints: In each provider's configuration area, directly modify the Base URL to a proxied endpoint
- Click "Save Configuration": Takes effect immediately without restarting the service
1. Unified Proxy Configuration: Configure a global proxy and specify which providers use it
   - Web UI Configuration: Fill in the proxy address in the "Proxy Settings" area of the "Configuration" page and check the providers that need to use the proxy
   - Configuration File: Configure in `configs/config.json`:

   ```json
   {
     "PROXY_URL": "http://127.0.0.1:7890",
     "PROXY_ENABLED_PROVIDERS": [
       "gemini-cli-oauth",
       "gemini-antigravity",
       "claude-kiro-oauth"
     ]
   }
   ```

2. Provider-Specific Proxied Endpoints: Some providers (like OpenAI, Claude) support configuring proxied API endpoints
   - Web UI Configuration: In each provider's configuration area on the "Configuration" page, modify the corresponding Base URL
   - Configuration File: Configure in `configs/config.json`:

   ```json
   {
     "OPENAI_BASE_URL": "https://your-proxy-endpoint.com/v1",
     "CLAUDE_BASE_URL": "https://your-proxy-endpoint.com"
   }
   ```
Supported Proxy Types:
- HTTP Proxy: `http://127.0.0.1:7890`
- HTTPS Proxy: `https://127.0.0.1:7890`
- SOCKS5 Proxy: `socks5://127.0.0.1:1080`
Use Cases:
- Network-Restricted Environments: Use in network environments where Google, OpenAI, and other services cannot be accessed directly
- Hybrid Configuration: Some providers use unified proxy, others use their own proxied endpoints
- Flexible Switching: Enable/disable proxy for specific providers at any time in the Web UI
Notes:
- Proxy configuration priority: Unified proxy configuration > Provider-specific endpoints > Direct connection
- Ensure the proxy service is stable and available, otherwise it may affect service quality
- SOCKS5 proxy usually performs better than HTTP proxy
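The stated priority order (unified proxy > provider-specific endpoint > direct connection) can be expressed as a small resolver. The configuration keys mirror the `configs/config.json` examples above; the function itself, including deriving the `*_BASE_URL` key from the provider name, is an illustrative assumption:

```python
def resolve_proxy(provider: str, config: dict):
    """Return the proxy URL for a provider, or None for a direct connection.

    Priority: unified proxy (when enabled for the provider) >
    provider-specific proxied endpoint > direct connection.
    """
    if config.get("PROXY_URL") and provider in config.get("PROXY_ENABLED_PROVIDERS", []):
        return config["PROXY_URL"]
    # Hypothetical mapping: "openai-custom" -> "OPENAI_BASE_URL"
    base_url_key = provider.split("-")[0].upper() + "_BASE_URL"
    if config.get(base_url_key):
        return config[base_url_key]
    return None

cfg = {
    "PROXY_URL": "http://127.0.0.1:7890",
    "PROXY_ENABLED_PROVIDERS": ["gemini-cli-oauth"],
    "OPENAI_BASE_URL": "https://your-proxy-endpoint.com/v1",
}
print(resolve_proxy("gemini-cli-oauth", cfg))   # http://127.0.0.1:7890
print(resolve_proxy("openai-custom", cfg))      # https://your-proxy-endpoint.com/v1
print(resolve_proxy("claude-kiro-oauth", cfg))  # None
```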
Unsupported models can be excluded via the `notSupportedModels` configuration; the system will automatically skip providers that list the requested model.

Configuration: Add a `notSupportedModels` field for providers in `configs/provider_pools.json`:
```json
{
  "gemini-cli-oauth": [
    {
      "uuid": "provider-1",
      "notSupportedModels": ["gemini-3.0-pro", "gemini-3.5-flash"],
      "checkHealth": true
    }
  ]
}
```

How It Works:
- When requesting a specific model, the system automatically filters out providers that have configured the model as unsupported
- Only providers that support the model will be selected to handle the request
Use Cases:
- Some accounts cannot access specific models due to quota or permission restrictions
- Need to assign different model access permissions to different accounts
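The filtering step reduces to a one-line predicate over the pool. A sketch, with field names as in `provider_pools.json` (the function is illustrative, not the project's actual code):

```python
def eligible_providers(pool, model):
    """Keep only the providers that do not list `model` as unsupported."""
    return [p for p in pool if model not in p.get("notSupportedModels", [])]

pool = [
    {"uuid": "provider-1", "notSupportedModels": ["gemini-3.0-pro", "gemini-3.5-flash"]},
    {"uuid": "provider-2"},
]
print([p["uuid"] for p in eligible_providers(pool, "gemini-3.0-pro")])  # ['provider-2']
```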
When all accounts under a Provider Type (e.g., gemini-cli-oauth) are exhausted due to 429 quota limits or marked as unhealthy, the system can automatically fallback to another compatible Provider Type (e.g., gemini-antigravity) instead of returning an error directly.
Configuration: Add a `providerFallbackChain` entry in `configs/config.json`:

```json
{
  "providerFallbackChain": {
    "gemini-cli-oauth": ["gemini-antigravity"],
    "gemini-antigravity": ["gemini-cli-oauth"],
    "claude-kiro-oauth": ["claude-custom"],
    "claude-custom": ["claude-kiro-oauth"]
  }
}
```

How It Works:
- Try to select a healthy account from the primary Provider Type pool
- If all accounts in that type are unhealthy or return 429:
- Look up the configured fallback types
- Check if the fallback type supports the requested model (protocol compatibility check)
- Select a healthy account from the fallback type's pool
- Supports multi-level degradation chains: `gemini-cli-oauth` → `gemini-antigravity` → `openai-custom`
- Only returns an error if all fallback types are also unavailable
Use Cases:
- In batch task scenarios, the free RPD quota of a single Provider Type can be easily exhausted in a short time
- Through cross-type Fallback, you can fully utilize the independent quotas of multiple Providers, improving overall availability and throughput
Notes:
- Fallback only occurs between protocol-compatible types (e.g., among `gemini-*` types, or among `claude-*` types)
- The system automatically checks whether the target Provider Type supports the requested model
Click to expand FAQ and solutions (port occupation, Docker startup, 429 errors, etc.)
Problem Description: After clicking "Generate Authorization", the browser opens the authorization page but authorization fails or cannot be completed.
Solutions:
- Check Network Connection: Ensure you can access Google, Alibaba Cloud, and other services normally
- Check Port Occupation: OAuth callbacks require specific ports (Gemini: 8085, Antigravity: 8086, Kiro: 19876-19880), ensure these ports are not occupied
- Clear Browser Cache: Try using incognito mode or clearing browser cache and retry
- Check Firewall Settings: Ensure the firewall allows access to local callback ports
- Docker Users: Ensure all OAuth callback ports are correctly mapped
Problem Description: When starting the service, it shows the port is already in use (e.g., EADDRINUSE).
Solutions:
```shell
# Windows - Find the process occupying the port
netstat -ano | findstr :3000
# Then use Task Manager to end the corresponding PID process

# Linux/macOS - Find and end the process occupying the port
lsof -i :3000
kill -9 <PID>
```

Or modify the port configuration in `configs/config.json` to use a different port.
Problem Description: Docker container fails to start or exits immediately.
Solutions:
- Check Logs: Run `docker logs aiclient2api` to view error messages
- Check Mount Path: Ensure the local path in the `-v` parameter exists and has read/write permissions
- Check Port Conflicts: Ensure all mapped ports are not occupied on the host
- Re-pull Image: `docker pull justlikemaki/aiclient-2-api:latest`
Problem Description: After uploading or configuring credential files, the system reports that they cannot be recognized or that the format is invalid.
Solutions:
- Check File Format: Ensure the credential file is valid JSON format
- Check File Path: Ensure the file path is correct; Docker users must ensure the file is inside the mounted directory
- Check File Permissions: Ensure the service has permission to read the credential file
- Regenerate Credentials: If credentials have expired, try re-authorizing via OAuth
Problem Description: API requests frequently return 429 Too Many Requests error.
Solutions:
- Configure Account Pool: Add multiple accounts to `provider_pools.json` and enable the polling mechanism
- Configure Fallback: Configure `providerFallbackChain` in `config.json` for cross-type degradation
- Reduce Request Frequency: Appropriately increase request intervals to avoid triggering rate limits
- Wait for Quota Reset: Free quotas usually reset daily or per minute
Problem Description: When requesting a specific model, it returns an error or shows the model is unavailable.
Solutions:
- Check Model Name: Ensure you're using the correct model name (case-sensitive)
- Check Provider Support: Confirm the currently configured provider supports that model
- Check Account Permissions: Some advanced models may require specific account permissions
- Configure Model Filtering: Use `notSupportedModels` to exclude unsupported models
Problem Description: Browser cannot open http://localhost:3000.
Solutions:
- Check Service Status: Confirm the service has started successfully, check terminal output
- Check Port Mapping: Docker users should ensure the `-p 3000:3000` parameter is correct
- Try Another Address: Try accessing `http://127.0.0.1:3000`
- Check Firewall: Ensure the firewall allows access to port 3000
Problem Description: When using streaming output, the response is interrupted midway or incomplete.
Solutions:
- Check Network Stability: Ensure network connection is stable
- Increase Timeout: Increase request timeout in client configuration
- Check Proxy Settings: If using a proxy, ensure the proxy supports long connections
- Check Service Logs: Check for error messages
Problem Description: After modifying configuration in Web UI, service behavior doesn't change.
Solutions:
- Refresh Page: Refresh the Web UI page after modification
- Check Save Status: Confirm the configuration was saved successfully (check prompt messages)
- Restart Service: Some configurations may require service restart to take effect
- Check Configuration File: Directly check `configs/config.json` to confirm changes were written
Problem Description: When calling API endpoints, it returns 404 Not Found error.
Solutions:
- Check Endpoint Path: Ensure you're using the correct endpoint path, such as `/v1/chat/completions` or `/ollama/api/chat`
- Check Client Auto-completion: Some clients (like Cherry-Studio, NextChat) automatically append paths (like `/v1/chat/completions`) to the Base URL, causing path duplication. Check the actual request URL in the console and remove the redundant path segments
- Check Service Status: Confirm the service has started normally; visit `http://localhost:3000` to view the Web UI
- Check Port Configuration: Ensure requests are sent to the correct port (default 3000)
- View Available Routes: Check "Interactive Routing Examples" on the Web UI dashboard page to see all available endpoints
Problem Description: When calling API endpoints, it returns an `Unauthorized: API key is invalid or missing.` error.
Solutions:
- Check API Key Configuration: Ensure the API Key is correctly configured in `configs/config.json` or the Web UI
- Check Request Header Format: Ensure the request contains a correctly formatted Authorization header, such as `Authorization: Bearer your-api-key`
- Check Service Logs: View detailed error messages on the "Real-time Logs" page in the Web UI to locate the specific cause
This project is released under the GNU General Public License v3 (GPLv3). For details, please check the LICENSE file in the root directory.
The development of this project was greatly inspired by the official Google Gemini CLI and referenced part of the code implementation of gemini-cli.ts in Cline 3.18.0. Sincere thanks to the Google official team and the Cline development team for their excellent work!
Thanks to all the developers who contributed to the AIClient-2-API project:
We are grateful for the support from our sponsors:
This project (AIClient-2-API) is for learning and research purposes only. Users assume all risks when using this project. The author is not responsible for any direct, indirect, or consequential losses resulting from the use of this project.
This project is an API proxy tool and does not provide any AI model services. All AI model services are provided by their respective third-party providers (such as Google, OpenAI, Anthropic, etc.). Users should comply with the terms of service and policies of each third-party service when accessing them through this project. The author is not responsible for the availability, quality, security, or legality of third-party services.
This project runs locally and does not collect or upload any user data. However, users should protect their API keys and other sensitive information when using this project. It is recommended that users regularly check and update their API keys and avoid using this project in insecure network environments.
Users should comply with the laws and regulations of their country/region when using this project. It is strictly prohibited to use this project for any illegal purposes. Any consequences resulting from users' violation of laws and regulations shall be borne by the users themselves.
