Web Models OpenRouter Proxy

Transform any web-based AI model into an OpenRouter-compatible API that works with Cline, Continue, Cursor, and other OpenAI-compatible tools.

🚀 Quick Start

1. Setup

# Verify the virtual environment and dependencies are ready
uv --version  # ✅ should print a version number

2. Choose Your Mode

Option A: Mock Server (Testing/Demo)

# Start mock server (no Chrome required)
uv run python test_server.py
# Server runs on: http://localhost:8001

Option B: Real Google AI Studio

# Start real server (requires Chrome)
uv run python openrouter_server.py  
# Server runs on: http://localhost:8000

🔧 Tool Configuration

Cline (VS Code Extension)

  1. Open Cline settings
  2. Configure:
    • Provider: OpenAI Compatible
    • Base URL: http://localhost:8001/v1 (mock) or http://localhost:8000/v1 (real)
    • API Key: dummy-key
    • Model: google/gemini-pro

Continue.dev

Edit ~/.continue/config.json:

{
  "models": [
    {
      "title": "Gemini Pro (Web)",
      "provider": "openai", 
      "model": "google/gemini-pro",
      "apiBase": "http://localhost:8001/v1",
      "apiKey": "dummy-key"
    }
  ]
}

Cursor IDE

  1. Settings → Models → Add Custom Model
  2. Configure:
    • Provider: OpenAI Compatible
    • Base URL: http://localhost:8001/v1
    • Model: google/gemini-pro
    • API Key: dummy-key

📊 Available Models

  • google/gemini-pro - Google's Gemini Pro via AI Studio
  • google/gemini-pro-vision - Gemini Pro with vision capabilities
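
These IDs are what the server reports through GET /v1/models. A minimal sketch of how such a registry and listing could look — the actual structure inside openrouter_server.py may differ, and the context_length values here are illustrative, not verified:

```python
# Hypothetical registry; field names follow the OpenAI model-list shape.
AVAILABLE_MODELS = {
    "google/gemini-pro": {
        "id": "google/gemini-pro",
        "name": "Gemini Pro (Web)",
        "context_length": 32768,  # illustrative value only
    },
    "google/gemini-pro-vision": {
        "id": "google/gemini-pro-vision",
        "name": "Gemini Pro Vision (Web)",
        "context_length": 16384,  # illustrative value only
    },
}

def list_models() -> dict:
    """Shape the registry into an OpenAI-style GET /v1/models payload."""
    return {
        "object": "list",
        "data": [{"object": "model", **m} for m in AVAILABLE_MODELS.values()],
    }
```

Tools like Cline call this endpoint first, so the IDs here must match the model name you configure exactly.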

🧪 Testing

Test Mock Server

# In terminal 1:
uv run python test_server.py

# In terminal 2:
uv run python test_mock_client.py

Test Real Server (with Chrome)

# In terminal 1:
uv run python openrouter_server.py

# In terminal 2: 
uv run python client_test.py

Quick Validation

uv run python quick_test.py

🔌 API Endpoints

  • GET /v1/models - List available models
  • POST /v1/chat/completions - Chat completions (streaming & non-streaming)
  • GET /health - Health check
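
For clients to accept the proxy, POST /v1/chat/completions must return the standard OpenAI response shape. A sketch of assembling a non-streaming response body (the helper name is hypothetical; the server's internal code may differ):

```python
import time
import uuid

def make_chat_response(model: str, text: str) -> dict:
    """Assemble an OpenAI-compatible non-streaming chat completion body."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
        # Token counts from a browser-driven backend are estimates at best.
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }
```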

💡 Usage Examples

Python (OpenAI Client)

import openai

client = openai.OpenAI(
    api_key="dummy-key",
    base_url="http://localhost:8001/v1"  # or 8000 for real
)

response = client.chat.completions.create(
    model="google/gemini-pro",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

cURL

curl -X POST http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemini-pro",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
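
With "stream": true in the request body, the reply arrives as Server-Sent Events: one "data:" line per chunk, terminated by "data: [DONE]". A sketch of parsing that wire format — the sample lines below are illustrative, not captured from this server:

```python
import json

def collect_stream(sse_lines):
    """Join the content deltas from OpenAI-style SSE chunk lines."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Illustrative sample of what a streaming response looks like on the wire
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    "data: [DONE]",
]
```

The official openai client handles this parsing for you when you pass stream=True; the sketch is useful mainly for debugging raw responses with curl.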

🛠 Adding More Models

To add support for other web-based models:

  1. Create LLM wrapper (similar to GoogleAIStudioLLM)
  2. Add to model registry in server
  3. Update get_llm_instance() function

Example for ChatGPT:

# Create chatgpt_llm.py
class ChatGPTWebLLM(LLM):
    # Implementation for chat.openai.com
    pass

# Add to openrouter_server.py
AVAILABLE_MODELS["openai/gpt-4-web"] = {...}
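
The dispatch in step 3 could then look like this sketch. The class and function names mirror the ones mentioned above, but the bodies are stand-ins; the real get_llm_instance() may cache instances or pass Selenium options:

```python
# Stand-in wrapper classes; the real ones drive a Chrome session.
class GoogleAIStudioLLM:
    pass

class ChatGPTWebLLM:
    pass

# Map model IDs to wrapper classes (hypothetical layout).
MODEL_CLASSES = {
    "google/gemini-pro": GoogleAIStudioLLM,
    "openai/gpt-4-web": ChatGPTWebLLM,
}

def get_llm_instance(model_id: str):
    """Return an LLM wrapper for the requested model, or fail loudly."""
    try:
        return MODEL_CLASSES[model_id]()
    except KeyError:
        raise ValueError(f"Unknown model: {model_id}")
```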

🔍 Troubleshooting

Mock Server Issues

  • ✅ Should work out of the box
  • Check port 8001 is available

Real Server Issues

  • Install Chrome browser
  • Check Google AI Studio login
  • Verify port 8000 is available
  • Check firewall settings

Tool Integration Issues

  • Ensure server is running
  • Verify correct base URL
  • Check API key (use "dummy-key")
  • Confirm model name matches exactly

📁 Project Structure

├── google_ai_studio_llm.py    # Real Google AI Studio wrapper
├── mock_llm.py                # Mock LLM for testing
├── openrouter_server.py       # Real server (port 8000)
├── test_server.py             # Mock server (port 8001)
├── client_test.py             # Test real server
├── test_mock_client.py        # Test mock server
├── quick_test.py              # Quick validation
└── README.md                  # This file

🎯 What You've Built

  • OpenRouter-compatible API for any web-based AI model
  • Mock testing environment that works without Chrome
  • Real Google AI Studio integration via Selenium
  • Full streaming support for real-time responses
  • Easy tool integration with Cline, Continue, Cursor, etc.
  • Extensible architecture for adding more models

🚀 Ready to Use!

Your setup is complete and tested. You can now:

  1. Use the mock server for development and testing
  2. Add Chrome and use the real Google AI Studio integration
  3. Configure your favorite coding tools to use your local OpenRouter
  4. Extend to other web-based models as needed

Happy coding! 🎉

About

This server acts as a bridge between applications that expect OpenAI's chat completion format and web-based GPT-style models.
