feat: add models command for OpenRouter model search and selection #1

bhadraagada merged 1 commit into main
Conversation
Pull request overview
This PR adds a new models command to enable searching and selecting OpenRouter models interactively, and integrates model selection into the setup wizard for OpenRouter templates. However, it also includes unrelated changes that remove most provider templates from the configuration.
Key Changes:
- Adds new `src/commands/models.ts` with OpenRouter API integration for listing, searching, and displaying model information
- Integrates interactive model selection into the setup wizard for OpenRouter profiles
- Registers new CLI command `ccx models` with options for listing, searching, and info display
- Removes 9 out of 11 provider templates (official, deepseek, gemini, ollama, lmstudio, groq, together, custom, openrouter-deepseek)
- Updates remaining OpenRouter templates with new default models
Reviewed changes
Copilot reviewed 4 out of 5 changed files in this pull request and generated 14 comments.
Show a summary per file
| File | Description |
|---|---|
| src/commands/models.ts | New file implementing OpenRouter model fetching, listing, searching, and info display functionality |
| src/commands/wizard.ts | Adds selectModelInteractive function and integrates OpenRouter model selection into both full and quick setup flows |
| src/cli.ts | Registers new models command with options for list, search, info, and interactive selection |
| src/templates/providers.ts | Removes most provider templates and updates default models for remaining OpenRouter templates |
| pnpm-lock.yaml | Lockfile regeneration (likely from fresh install) |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```diff
  description: 'OpenRouter with MiniMax M2.1 model',
  baseUrl: 'https://openrouter.ai/api',
- defaultModel: 'minimax/minimax-m1-80k',
+ defaultModel: 'minimax/minimax-m2.1',
```
The default model was changed from 'minimax/minimax-m1-80k' to 'minimax/minimax-m2.1'. If this is an intentional model upgrade, it should be mentioned in the PR description. If not, this could be a mistake as it changes the default behavior for existing users of this template.
@copilot open a new pull request to apply changes based on this feedback
```typescript
let cachedModels: OpenRouterModel[] | null = null;

export async function fetchModels(): Promise<OpenRouterModel[]> {
  if (cachedModels) return cachedModels;
```
The model cache is never invalidated once set. If the OpenRouter API updates its model list during a long-running session, users won't see the updated models. Consider adding a cache expiration mechanism or a way to force refresh the cache.
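One way to address this is a TTL-based cache: store a fetch timestamp alongside the cached list and treat entries older than the TTL as misses. The sketch below is illustrative, not the PR's code — the names (`CacheEntry`, `CACHE_TTL_MS`, `getCached`, `setCached`) and the five-minute TTL are assumptions.

```typescript
// Illustrative TTL cache sketch; entry type, names, and TTL value are assumptions.
type CacheEntry<T> = { value: T; fetchedAt: number };

const CACHE_TTL_MS = 5 * 60 * 1000; // 5 minutes, an arbitrary choice

let cache: CacheEntry<string[]> | null = null;

function getCached(now: number = Date.now()): string[] | null {
  // An entry older than the TTL counts as a cache miss
  if (cache && now - cache.fetchedAt < CACHE_TTL_MS) return cache.value;
  return null;
}

function setCached(value: string[], now: number = Date.now()): void {
  cache = { value, fetchedAt: now };
}
```

`fetchModels` would then call `getCached()` first and `setCached(...)` after a successful fetch; a `--refresh` flag could simply skip the `getCached` check.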
```typescript
export async function searchModelsInteractive(): Promise<string | null> {
  console.log('Fetching models from OpenRouter...\n');

  const models = await fetchModels();

  if (models.length === 0) {
    console.error('No models found or failed to fetch.');
    return null;
  }

  // First, ask for search term
  const searchTerm = await input({
    message: 'Search models (or press enter to browse all):',
  });

  let filtered = models;

  if (searchTerm) {
    const term = searchTerm.toLowerCase();
    filtered = models.filter(m =>
      m.id.toLowerCase().includes(term) ||
      m.name.toLowerCase().includes(term)
    );
  }

  if (filtered.length === 0) {
    console.log(`No models found matching "${searchTerm}"`);
    return null;
  }

  // Sort by name and limit choices
  filtered.sort((a, b) => a.id.localeCompare(b.id));
  const choices = filtered.slice(0, 50).map(m => {
    const promptPrice = parseFloat(m.pricing.prompt) * 1000000;
    const contextStr = formatContext(m.context_length);
    return {
      name: `${m.id.padEnd(40)} ${contextStr.padEnd(10)} $${promptPrice.toFixed(2)}/1M`,
      value: m.id,
    };
  });

  if (filtered.length > 50) {
    console.log(`\nShowing first 50 of ${filtered.length} matches. Use a more specific search term.\n`);
  }

  const selected = await select({
    message: 'Select a model:',
    choices,
    pageSize: 15,
  });

  return selected;
}
```
The searchModelsInteractive function is exported but referenced only from the CLI command handler. If it is intended as the public API for the models command, that works correctly. However, it duplicates much of the selectModelInteractive function in wizard.ts, which has similar but not identical logic. Consider unifying the two.
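A minimal sketch of such a unification, assuming both call sites can share one filter helper: the `Model` shape mirrors the PR's `OpenRouterModel`, and the hypothetical `includeDescription` flag captures the one behavioral difference (listModels also matches descriptions; searchModelsInteractive does not).

```typescript
// Sketch of a shared filter helper; names and the includeDescription flag
// are illustrative, not from the PR.
interface Model {
  id: string;
  name: string;
  description?: string;
}

function filterModels(models: Model[], searchTerm?: string, includeDescription = false): Model[] {
  if (!searchTerm) return models;
  const term = searchTerm.toLowerCase();
  return models.filter(m =>
    m.id.toLowerCase().includes(term) ||
    m.name.toLowerCase().includes(term) ||
    (includeDescription && (m.description?.toLowerCase().includes(term) ?? false))
  );
}
```

Both functions could then differ only in presentation (table output vs. an interactive select prompt), not in search behavior.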
```typescript
// Models command - fetch and search available models from OpenRouter

import { select, input } from '@inquirer/prompts';

interface OpenRouterModel {
  id: string;
  name: string;
  description?: string;
  pricing: {
    prompt: string;
    completion: string;
  };
  context_length: number;
  top_provider?: {
    max_completion_tokens?: number;
  };
}

interface ModelsResponse {
  data: OpenRouterModel[];
}

let cachedModels: OpenRouterModel[] | null = null;

export async function fetchModels(): Promise<OpenRouterModel[]> {
  if (cachedModels) return cachedModels;

  try {
    const response = await fetch('https://openrouter.ai/api/v1/models');
    if (!response.ok) {
      throw new Error(`Failed to fetch models: ${response.statusText}`);
    }
    const data = await response.json() as ModelsResponse;
    cachedModels = data.data || [];
    return cachedModels;
  } catch (error) {
    console.error('Error fetching models:', (error as Error).message);
    return [];
  }
}

export async function listModels(searchTerm?: string, limit: number = 30): Promise<void> {
  console.log('Fetching models from OpenRouter...\n');

  const models = await fetchModels();

  if (models.length === 0) {
    console.error('No models found or failed to fetch.');
    return;
  }

  let filtered = models;

  if (searchTerm) {
    const term = searchTerm.toLowerCase();
    filtered = models.filter(m =>
      m.id.toLowerCase().includes(term) ||
      m.name.toLowerCase().includes(term) ||
      (m.description?.toLowerCase().includes(term))
    );
  }

  if (filtered.length === 0) {
    console.log(`No models found matching "${searchTerm}"`);
    return;
  }

  // Sort by name
  filtered.sort((a, b) => a.id.localeCompare(b.id));

  const showing = filtered.slice(0, limit);

  console.log(`Found ${filtered.length} models${searchTerm ? ` matching "${searchTerm}"` : ''}:`);
  if (filtered.length > limit) {
    console.log(`(showing first ${limit}, use --limit to show more)\n`);
  } else {
    console.log('');
  }

  // Print as table
  console.log('Model ID'.padEnd(45) + 'Context'.padEnd(12) + 'Price (per 1M tokens)');
  console.log('─'.repeat(80));

  for (const model of showing) {
    const promptPrice = parseFloat(model.pricing.prompt) * 1000000;
    const completionPrice = parseFloat(model.pricing.completion) * 1000000;
    const priceStr = `$${promptPrice.toFixed(2)} / $${completionPrice.toFixed(2)}`;
    const contextStr = formatContext(model.context_length);

    console.log(
      model.id.padEnd(45) +
      contextStr.padEnd(12) +
      priceStr
    );
  }

  console.log('');
  console.log('Usage: ccx create <profile> --template openrouter --model <model-id>');
}

export async function searchModelsInteractive(): Promise<string | null> {
  console.log('Fetching models from OpenRouter...\n');

  const models = await fetchModels();

  if (models.length === 0) {
    console.error('No models found or failed to fetch.');
    return null;
  }

  // First, ask for search term
  const searchTerm = await input({
    message: 'Search models (or press enter to browse all):',
  });

  let filtered = models;

  if (searchTerm) {
    const term = searchTerm.toLowerCase();
    filtered = models.filter(m =>
      m.id.toLowerCase().includes(term) ||
      m.name.toLowerCase().includes(term)
    );
  }

  if (filtered.length === 0) {
    console.log(`No models found matching "${searchTerm}"`);
    return null;
  }

  // Sort by name and limit choices
  filtered.sort((a, b) => a.id.localeCompare(b.id));
  const choices = filtered.slice(0, 50).map(m => {
    const promptPrice = parseFloat(m.pricing.prompt) * 1000000;
    const contextStr = formatContext(m.context_length);
    return {
      name: `${m.id.padEnd(40)} ${contextStr.padEnd(10)} $${promptPrice.toFixed(2)}/1M`,
      value: m.id,
    };
  });

  if (filtered.length > 50) {
    console.log(`\nShowing first 50 of ${filtered.length} matches. Use a more specific search term.\n`);
  }

  const selected = await select({
    message: 'Select a model:',
    choices,
    pageSize: 15,
  });

  return selected;
}

function formatContext(contextLength: number): string {
  if (contextLength >= 1000000) {
    return `${(contextLength / 1000000).toFixed(1)}M`;
  } else if (contextLength >= 1000) {
    return `${(contextLength / 1000).toFixed(0)}K`;
  }
  return `${contextLength}`;
}

export async function getModelInfo(modelId: string): Promise<void> {
  const models = await fetchModels();
  const model = models.find(m => m.id === modelId);

  if (!model) {
    console.error(`Model "${modelId}" not found.`);
    return;
  }

  const promptPrice = parseFloat(model.pricing.prompt) * 1000000;
  const completionPrice = parseFloat(model.pricing.completion) * 1000000;

  console.log(`\nModel: ${model.id}`);
  console.log('─'.repeat(50));
  console.log(`Name: ${model.name}`);
  console.log(`Context: ${formatContext(model.context_length)} tokens`);
  console.log(`Prompt: $${promptPrice.toFixed(4)} / 1M tokens`);
  console.log(`Completion: $${completionPrice.toFixed(4)} / 1M tokens`);
  if (model.description) {
    console.log(`Description: ${model.description}`);
  }
  console.log('');
}
```
The new models command and model selection functionality lack test coverage. Consider adding tests for fetchModels (including error cases), listModels, searchModelsInteractive, and getModelInfo functions to ensure reliability, especially for the API integration and error handling paths.
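One way to make the fetch path testable is to inject the fetch function. The PR's fetchModels calls the global fetch directly, so this sketch assumes a small refactor; `fetchModelIds` is a hypothetical name, and the stubbed responses stand in for the OpenRouter API.

```typescript
// Hypothetical injectable variant of the PR's fetchModels; returns [] on any
// failure, mirroring the PR's error handling.
async function fetchModelIds(fetchFn: typeof fetch): Promise<string[]> {
  try {
    const response = await fetchFn('https://openrouter.ai/api/v1/models');
    if (!response.ok) throw new Error(`Failed to fetch models: ${response.statusText}`);
    const data = await response.json() as { data?: { id: string }[] };
    return (data.data ?? []).map(m => m.id);
  } catch {
    return [];
  }
}

// Stubs used in place of the real API during tests.
const okFetch = (async () =>
  new Response(JSON.stringify({ data: [{ id: 'minimax/minimax-m2.1' }] }))) as typeof fetch;
const errFetch = (async () => new Response('oops', { status: 500 })) as typeof fetch;
```

In a real test runner (vitest, node:test, etc.) the stubs would become mocks, and the error-path assertion would also cover network rejections and malformed JSON.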
```typescript
displayName: 'OpenRouter (GLM-4.7)',
description: 'OpenRouter with ZhipuAI GLM-4.7 model',
```
The displayName was changed from "OpenRouter" to "OpenRouter (GLM-4.7)", which may cause confusion since users can now select any model via the new model search feature, not just GLM-4.7. The display name should reflect that this is a general OpenRouter template with GLM-4.7 as the default, or simply remain "OpenRouter" for clarity.
```diff
- displayName: 'OpenRouter (GLM-4.7)',
- description: 'OpenRouter with ZhipuAI GLM-4.7 model',
+ displayName: 'OpenRouter',
+ description: 'OpenRouter with ZhipuAI GLM-4.7 model (default)',
```
```typescript
const isOpenRouter = templateChoice.startsWith('openrouter') || template.baseUrl.includes('openrouter');
```
The OpenRouter detection uses both template name prefix check (startsWith('openrouter')) and URL check (includes('openrouter')). However, if a user provides a custom base URL that contains 'openrouter' but isn't actually OpenRouter, or changes the base URL to a custom one after selecting an OpenRouter template, this could trigger the model selection inappropriately. Consider making the check more explicit or adding additional validation.
```diff
- const isOpenRouter = templateChoice.startsWith('openrouter') || template.baseUrl.includes('openrouter');
+ let isOpenRouter = templateChoice.startsWith('openrouter');
+ if (!isOpenRouter && baseUrl) {
+   try {
+     const parsedUrl = new URL(baseUrl);
+     const hostname = parsedUrl.hostname.toLowerCase();
+     if (hostname === 'openrouter.ai' || hostname.endsWith('.openrouter.ai')) {
+       isOpenRouter = true;
+     }
+   } catch {
+     // If baseUrl is not a valid URL, fall back to template-based detection only
+   }
+ }
```
```typescript
const promptPrice = parseFloat(model.pricing.prompt) * 1000000;
const completionPrice = parseFloat(model.pricing.completion) * 1000000;
const priceStr = `$${promptPrice.toFixed(2)} / $${completionPrice.toFixed(2)}`;
```
The parseFloat operations on model.pricing.prompt and model.pricing.completion could produce NaN if the API returns invalid pricing data. This would result in displaying 'NaN' to users. Consider adding validation or fallback values (e.g., 'N/A') for cases where pricing information is unavailable or invalid.
```diff
- const promptPrice = parseFloat(model.pricing.prompt) * 1000000;
- const completionPrice = parseFloat(model.pricing.completion) * 1000000;
- const priceStr = `$${promptPrice.toFixed(2)} / $${completionPrice.toFixed(2)}`;
+ const promptPriceRaw = parseFloat(model.pricing.prompt);
+ const completionPriceRaw = parseFloat(model.pricing.completion);
+ let priceStr: string;
+ if (Number.isNaN(promptPriceRaw) || Number.isNaN(completionPriceRaw)) {
+   priceStr = 'N/A';
+ } else {
+   const promptPrice = promptPriceRaw * 1000000;
+   const completionPrice = completionPriceRaw * 1000000;
+   priceStr = `$${promptPrice.toFixed(2)} / $${completionPrice.toFixed(2)}`;
+ }
```
```typescript
// Sort by name and limit choices
filtered.sort((a, b) => a.id.localeCompare(b.id));
const choices = filtered.slice(0, 50).map(m => {
  const promptPrice = parseFloat(m.pricing.prompt) * 1000000;
```
Same as in listModels, parseFloat on pricing could produce NaN. Additionally, this occurs in multiple places (lines 93, 134, 173-174). Consider extracting price calculation to a helper function that handles invalid values consistently.
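A sketch of such a helper, under the assumption that an 'N/A' fallback is acceptable display behavior; `formatPricePerMillion` is an illustrative name, not from the PR.

```typescript
// Converts an OpenRouter per-token price string to a per-1M-token display
// string, falling back to 'N/A' for missing or unparseable values.
function formatPricePerMillion(raw: string | undefined): string {
  const perToken = parseFloat(raw ?? '');
  if (Number.isNaN(perToken)) return 'N/A';
  return `$${(perToken * 1_000_000).toFixed(2)}`;
}
```

All three call sites (the listModels table, the interactive choice labels, and getModelInfo) could then build their price columns from this one function, e.g. `${formatPricePerMillion(m.pricing.prompt)}/1M`.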
```typescript
const isOpenRouter = templateChoice.startsWith('openrouter') || template.baseUrl.includes('openrouter');

if (isOpenRouter) {
  model = await selectModelInteractive(template.defaultModel);
```
The selectModelInteractive function can return undefined (when user selects default but defaultModel is not provided), but this isn't handled when assigning to model. If model becomes undefined and template.defaultModel is also undefined, the profile will be created with an undefined model, which could cause runtime errors. Consider adding a fallback or validation.
```diff
  model = await selectModelInteractive(template.defaultModel);
+ // Fallback if no model was selected and no valid default is available
+ if (!model) {
+   if (template.defaultModel) {
+     model = template.defaultModel;
+   } else {
+     model = await input({
+       message: 'Model name (optional):',
+     });
+   }
+ }
```
```typescript
    console.log(`\nUse with: ccx create <profile> --template openrouter --model ${model}`);
  }
} else {
  await listModels(search, parseInt(options.limit));
```
The parseInt operation on options.limit could produce NaN if the user provides an invalid value. This would silently fail and pass NaN to listModels. Consider validating the parsed value or providing a default, for example: 'parseInt(options.limit) || 30'
```diff
- await listModels(search, parseInt(options.limit));
+ const parsedLimit = parseInt(options.limit, 10);
+ const limit = !Number.isNaN(parsedLimit) && parsedLimit > 0 ? parsedLimit : 30;
+ await listModels(search, limit);
```
@bhadraagada I've opened a new pull request, #2, to work on those changes. Once the pull request is ready, I'll request review from you.

@copilot open a new pull request to apply changes based on the comments in this thread

@bhadraagada I've opened a new pull request, #3, to work on those changes. Once the pull request is ready, I'll request review from you.
Adds models command and interactive model selection for OpenRouter