
Commit 3543454

docs: update recommended model links (#7345)
1 parent 19ee8bd commit 3543454

9 files changed: +12 -77 lines changed


docs/customization/models.mdx

Lines changed: 4 additions & 4 deletions

@@ -18,11 +18,11 @@ description: "These blocks form the foundation of the entire assistant experienc

 | Model role | Best open models | Best closed models | Notes |
 |:------------|:------------------|:-------------------|:--------|
-| Agent / Plan | Qwen 3 Coder (480B), Qwen 3 Coder (30B), Devstral (24B), GLM 4.5 (355B), GLM 4.5 Air (106B), Kimi K2 (1T), gpt-oss (120B), gpt-oss (20B) | [Claude Opus 4.1](https://hub.continue.dev/anthropic/claude-4-1-opus), [Claude Sonnet 4](https://hub.continue.dev/anthropic/claude-4-sonnet), [GPT-5](https://hub.continue.dev/openai/gpt-5), [Gemini 2.5 Pro](https://hub.continue.dev/google/gemini-2.5-pro) | Closed models are slightly better than open models |
-| Chat / Edit | Qwen 3 Coder (480B), Qwen 3 Coder (30B), gpt-oss (120B), gpt-oss (20B) | [Claude Opus 4.1](https://hub.continue.dev/anthropic/claude-4-1-opus), [Claude Sonnet 4](https://hub.continue.dev/anthropic/claude-4-sonnet), [GPT-5](https://hub.continue.dev/openai/gpt-5), [Gemini 2.5 Pro](https://hub.continue.dev/google/gemini-2.5-pro) | Closed and open models have pretty similar performance |
-| Autocomplete | [QwenCoder2.5 (1.5B)](https://hub.continue.dev/ollama/qwen2.5-coder-1.5b), QwenCoder2.5 (7B) | [Codestral](https://hub.continue.dev/mistral/codestral), Mercury Coder | Closed models are slightly better than open models |
+| Agent / Plan | Qwen 3 Coder (480B), Qwen 3 Coder (30B), Qwen2.5-Coder (32B), Devstral (27B), Devstral (24B), GLM 4.5 (355B), GLM 4.5 Air (106B), Kimi K2 (1T), gpt-oss (120B), gpt-oss (20B) | [Claude Opus 4.1](https://hub.continue.dev/anthropic/claude-4-1-opus), [Claude Sonnet 4](https://hub.continue.dev/anthropic/claude-4-sonnet), GPT-4, [GPT-5](https://hub.continue.dev/openai/gpt-5), [Gemini 2.5 Pro](https://hub.continue.dev/google/gemini-2.5-pro), DeepSeek models | Closed models are slightly better than open models |
+| Chat / Edit | Qwen 3 Coder (480B), Qwen 3 Coder (30B), gpt-oss (120B), gpt-oss (20B), DeepSeek Chat | [Claude Opus 4.1](https://hub.continue.dev/anthropic/claude-4-1-opus), [Claude Sonnet 4](https://hub.continue.dev/anthropic/claude-4-sonnet), [GPT-5](https://hub.continue.dev/openai/gpt-5), [Gemini 2.5 Pro](https://hub.continue.dev/google/gemini-2.5-pro) | Closed and open models have pretty similar performance |
+| Autocomplete | [QwenCoder2.5 (1.5B)](https://hub.continue.dev/ollama/qwen2.5-coder-1.5b), QwenCoder2.5 (7B) | [Codestral](https://hub.continue.dev/mistral/codestral), Mercury Coder, Mercury Coder Small, DeepSeek Coder | Closed models are slightly better than open models |
 | Apply | N/A | [Relace Instant Apply](https://hub.continue.dev/relace/instant-apply), [Morph Fast Apply](https://hub.continue.dev/morphllm/morph-v2) | Open models are basically non-existent / not good enough for this model role |
-| Embed | N/A | [Voyage Code 3](https://hub.continue.dev/voyageai/voyage-code-3), [Morph Embeddings](https://hub.continue.dev/morphllm/morph-embedding-v2), Codestral Embed | Open models are basically non-existent / not good enough for this model role |
+| Embed | Nomic Embed Text | [Voyage Code 3](https://hub.continue.dev/voyageai/voyage-code-3), [Morph Embeddings](https://hub.continue.dev/morphllm/morph-embedding-v2), Codestral Embed, text-embedding-3-large, text-embedding-004 | Open embeddings models are emerging but closed models still perform better |
 | Rerank | zerank-1, zerank-1-small | rerank-2.5, Relace Code Rerank, [Morph Rerank](https://hub.continue.dev/morphllm/morph-rerank-v2) | Open models are beginning to emerge for this model role |
 | Next Edit | Zeta | Mercury Coder | Closed models are significantly better than open models |
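
For readers of this commit, the roles in the table above correspond to the `roles` field of a model block in a Continue assistant's config.yaml. A minimal sketch, assuming the hub YAML schema; the model ID and secret name are illustrative:

```yaml
# Minimal sketch: pin a recommended model (here Codestral) to the autocomplete role.
# Assumes Continue's config.yaml schema; model ID and secret name are illustrative.
models:
  - name: Codestral
    provider: mistral
    model: codestral-latest
    apiKey: ${{ secrets.MISTRAL_API_KEY }}
    roles:
      - autocomplete
```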

docs/customize/model-providers/top-level/azure.mdx

Lines changed: 1 addition & 1 deletion

@@ -92,7 +92,7 @@ If you use Azure Machine Learning Studio to deploy Codestral:

 ## How to Configure Azure AI Foundry Embeddings Models

-We recommend configuring **text-embedding-3-large** as your embeddings model.
+For recommended embeddings models, please refer to our [Model Recommendations page](/customization/models).

 <Tabs>
 <Tab title="YAML">
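
The YAML tab contents are not part of this diff; as a rough sketch only, an Azure embeddings block might look like the following. The resource URL, deployment name, apiVersion, and `env` field names are assumptions to verify against the Azure provider docs:

```yaml
# Rough sketch only: Azure OpenAI embeddings model in config.yaml.
# apiBase, deployment, and apiVersion are placeholder/assumed values.
models:
  - name: Azure Embeddings
    provider: azure
    model: text-embedding-3-large
    apiBase: https://<your-resource>.openai.azure.com
    apiKey: ${{ secrets.AZURE_API_KEY }}
    env:
      deployment: <your-deployment-name>
      apiVersion: "2024-02-15-preview"
    roles:
      - embed
```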

docs/customize/model-providers/top-level/deepseek.mdx

Lines changed: 1 addition & 1 deletion

@@ -41,7 +41,7 @@ We recommend configuring **DeepSeek Chat** as your chat model.

 ## How to Set Up DeepSeek Autocomplete Models

-We recommend configuring **DeepSeek Coder** as your autocomplete model.
+For recommended autocomplete models, please refer to our [Model Recommendations page](/customization/models).

 <Tabs>
 <Tab title="YAML">
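
For context, the wiring behind the YAML tab might look roughly like this sketch; the DeepSeek model ID is an assumption, so confirm it against DeepSeek's current model list:

```yaml
# Sketch: DeepSeek model assigned to the autocomplete role.
# The model ID is an assumption; confirm against DeepSeek's model list.
models:
  - name: DeepSeek Coder
    provider: deepseek
    model: deepseek-coder
    apiKey: ${{ secrets.DEEPSEEK_API_KEY }}
    roles:
      - autocomplete
```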

docs/customize/model-providers/top-level/gemini.mdx

Lines changed: 1 addition & 1 deletion

@@ -48,7 +48,7 @@ Gemini currently does not offer any autocomplete models.

 ## How to Configure Gemini Embeddings Models

-We recommend configuring **text-embedding-004** as your embeddings model.
+For recommended embeddings models, please refer to our [Model Recommendations page](/customization/models).

 <Tabs>
 <Tab title="YAML">
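
A sketch of the embeddings setup behind the YAML tab; the Gemini embeddings model ID is an assumption:

```yaml
# Sketch: Gemini embeddings model assigned to the embed role.
# The model ID is an assumption; confirm against Google's embeddings model names.
models:
  - name: Gemini Embeddings
    provider: gemini
    model: models/text-embedding-004
    apiKey: ${{ secrets.GEMINI_API_KEY }}
    roles:
      - embed
```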

docs/customize/model-providers/top-level/ollama.mdx

Lines changed: 1 addition & 1 deletion

@@ -65,7 +65,7 @@ We recommend configuring **Qwen2.5-Coder 1.5B** as your autocomplete model.

 ## How to Set Up Ollama Embeddings Models

-We recommend configuring **Nomic Embed Text** as your embeddings model.
+For recommended embeddings models, please refer to our [Model Recommendations page](/customization/models).

 <Tabs>
 <Tab title="YAML">
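
A sketch of the local embeddings setup behind the YAML tab, assuming the `nomic-embed-text` model has already been pulled with Ollama (no API key is needed for local models):

```yaml
# Sketch: local embeddings served by Ollama, assigned to the embed role.
models:
  - name: Nomic Embed Text
    provider: ollama
    model: nomic-embed-text
    roles:
      - embed
```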

docs/features/agent/model-setup.mdx

Lines changed: 1 addition & 16 deletions

@@ -27,22 +27,7 @@ Instead of relying solely on native tool calling APIs (which vary between provid

 - **Better reliability** - Models that struggle with native tools often perform better with system message tools
 - **Seamless switching** - Change between providers without modifying your workflow

-### What Models Are Recommended for Agent Mode
-
-For the best Agent mode experience, we recommend models with strong reasoning and instruction-following capabilities:
-
-**Premium Models:**
-
-- **Claude Sonnet 4** (Anthropic) - Our top recommendation for its exceptional tool use and reasoning
-- **GPT-4** models (OpenAI) - Excellent native tool support
-- **DeepSeek** models - Strong performance with competitive pricing
-
-**Local Models:**
-While more limited in capabilities, these models can work with system message tools:
-
-- Qwen2.5-Coder 32B - Best local option for Agent mode
-- Devstral 27B - Good for code-specific tasks
-- Smaller models (7B-13B) - May struggle with complex tool interactions
+For recommended Agent models, please refer to our [Model Recommendations page](/customization/models).

 ### How to Configure Agent Mode
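
The removed section above named Qwen2.5-Coder 32B as the strongest local option for Agent mode. A sketch of how such a model could be configured; Agent mode uses the selected chat model, and the Ollama tag is an assumption:

```yaml
# Sketch: a local model served by Ollama for Chat/Agent use.
# The Ollama tag is an assumption; pull it first with `ollama pull qwen2.5-coder:32b`.
models:
  - name: Qwen2.5-Coder 32B
    provider: ollama
    model: qwen2.5-coder:32b
    roles:
      - chat
      - edit
```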

docs/features/autocomplete/model-setup.mdx

Lines changed: 1 addition & 39 deletions

@@ -10,45 +10,7 @@ Setting up the right model for autocomplete is crucial for a smooth coding exper

 For a complete comparison of all autocomplete models, see our [comprehensive model recommendations](/customization/models#recommended-models).
 </Info>

-## Recommended Models for Autocomplete in Continue
-
-### Hosted (Best Performance)
-
-For the highest quality autocomplete suggestions, we recommend **[Codestral](https://hub.continue.dev/mistral/codestral)** from Mistral.
-
-This model is specifically designed for code completion and offers excellent performance across multiple programming languages.
-
-**Codestral Quick Setup:**
-
-1. Get your API key from [Mistral AI](https://console.mistral.ai)
-2. Add [Codestral](https://hub.continue.dev/mistral/codestral) to your assistant on Continue Hub
-3. Add `MISTRAL_API_KEY` as a [User Secret](https://docs.continue.dev/hub/secrets/secret-types#user-secrets) on Continue Hub [here](https://hub.continue.dev/settings/secrets)
-4. Click `Reload config` in the assistant selector in the Continue IDE extension
-
-### Hosted (Best Speed/Quality Tradeoff)
-
-For fast, quality autocomplete suggestions, we recommend **[Mercury Coder Small](https://hub.continue.dev/inceptionlabs/mercury-coder-small)** from Inception.
-
-This model is specifically designed for code completion and is particularly fast because it is a diffusion model.
-
-**Mercury Coder Small Quick Setup:**
-
-1. Get your API key from [Inception](https://platform.inceptionlabs.ai/)
-2. Add [Mercury Coder Small](https://hub.continue.dev/inceptionlabs/mercury-coder-small) to your assistant on Continue Hub
-3. Add `INCEPTION_API_KEY` as a [User Secret](https://docs.continue.dev/hub/secrets/secret-types#user-secrets) on Continue Hub [here](https://hub.continue.dev/settings/secrets)
-4. Click `Reload config` in the assistant selector in the Continue IDE extension
-
-### Local (Offline / Privacy First)
-
-For a fully local autocomplete experience, we recommend **[Qwen 2.5 Coder 1.5B](https://hub.continue.dev/ollama/qwen2.5-coder-1.5b)**.
-
-This model provides good suggestions while keeping your code completely private.
-
-**Quick Setup:**
-
-1. Install [Ollama](https://ollama.ai/)
-2. Add [Qwen 2.5 Coder 1.5B](https://hub.continue.dev/ollama/qwen2.5-coder-1.5b) to your assistant on Continue Hub
-3. Click `Reload config` in the assistant selector in the Continue IDE extension
+For model recommendations, please refer to our [Model Recommendations page](/customization/models).

 ## Next Edit Model
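
The removed walkthroughs above each reduce to adding one model block to the assistant config. For the local route, a sketch, assuming the model has been pulled with Ollama:

```yaml
# Sketch: fully local autocomplete with Ollama.
# Pull the model first: `ollama pull qwen2.5-coder:1.5b` (tag is an assumption).
models:
  - name: Qwen2.5-Coder 1.5B
    provider: ollama
    model: qwen2.5-coder:1.5b
    roles:
      - autocomplete
```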

docs/features/chat/model-setup.mdx

Lines changed: 1 addition & 10 deletions

@@ -7,16 +7,7 @@ The model you use for for Chat mode will be

 For a comprehensive comparison of all available models by role, see our [model recommendations table](/customization/models#recommended-models).
 </Info>

-## What Models Are Recommended for Chat?
-
-Our strong recommendation is to use [Claude Sonnet 4](https://hub.continue.dev/anthropic/claude-4-sonnet) from Anthropic.
-
-Its strong tool calling and reasoning capabilities make it the best model for Agent mode.
-
-1. Get your API key from [Anthropic](https://console.anthropic.com/)
-2. Add [Claude Sonnet 4](https://hub.continue.dev/anthropic/claude-4-sonnet) to your assistant on Continue Hub
-3. Add `ANTHROPIC_API_KEY` as a [User Secret](https://docs.continue.dev/hub/secrets/secret-types#user-secrets) on Continue Hub [here](https://hub.continue.dev/settings/secrets)
-4. Click `Reload config` in the assistant selector in the Continue IDE extension
+For model recommendations, please refer to our [Model Recommendations page](/customization/models).

 ### What Other Hosted Models Are Available?
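
The removed steps above wired Claude Sonnet 4 in through Continue Hub; the equivalent config.yaml block would look roughly like this sketch (the Anthropic model ID is an assumption):

```yaml
# Sketch: Claude Sonnet 4 as the chat model; the model ID is an assumption.
models:
  - name: Claude Sonnet 4
    provider: anthropic
    model: claude-sonnet-4-20250514
    apiKey: ${{ secrets.ANTHROPIC_API_KEY }}
    roles:
      - chat
```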

docs/features/edit/model-setup.mdx

Lines changed: 1 addition & 4 deletions

@@ -16,10 +16,7 @@ The recommended models and how to set them up can be found [here](/features/chat

 We also recommend setting up an Apply model for the best Edit experience.

-**Recommended Apply models:**
-
-- [Morph v3](https://hub.continue.dev/morphllm/morph-v2)
-- [Relace Instant Apply](https://hub.continue.dev/relace/instant-apply)
+For recommended Apply models, please refer to our [Model Recommendations page](/customization/models).

 ## How to Determine Model Compatibility
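
One way to add an Apply model such as Relace Instant Apply (linked in the removed list above) is to reference its hub block directly. A sketch, assuming the `uses:` block syntax; secret wiring may differ per block:

```yaml
# Sketch: pull in an Apply model as a hub block; the `uses:` syntax is assumed here.
models:
  - uses: relace/instant-apply
```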
