Unable to disable thinking/reasoning for Ollama models in Continue chat (JetBrains, v1.0.60) #11265

@FaunoGuazina

Description

Error Details

  • Model: deepseek-coder-6.7b
  • Provider: ollama
  • Status Code: 400

Error Output

"registry.ollama.ai/library/deepseek-coder:6.7b does not support thinking"

Additional Context

Environment

  • Continue plugin version: 1.0.60
  • JetBrains platform version: 2024.1+
  • Continue build date: Feb 05, 2026
  • IDE: IntelliJ IDEA
  • OS: macOS
  • Provider: Ollama
  • Models tested:
    • llama3.1:8b
    • deepseek-coder:6.7b
    • phi3:mini
    • qwen2.5-coder:7b

Problem

I am trying to disable thinking/reasoning for local Ollama models in Continue, but Continue still sends requests in chat as if thinking were enabled.

As a result, Continue chat keeps failing with errors like:

  • "registry.ollama.ai/library/llama3.1:8b does not support thinking"
  • "registry.ollama.ai/library/deepseek-coder:6.7b does not support thinking"
  • "registry.ollama.ai/library/phi3:mini does not support thinking"

Important detail

The problem appears to affect the chat flow specifically.

If I use Edit mode with Command+I, it works correctly.
If I use the normal chat, it fails with "does not support thinking".

Expected behavior

With reasoning: false set explicitly in the model configuration, Continue should stop sending thinking/reasoning-related parameters for those models in the chat flow, just as it does in the edit flow.

Actual behavior

Even with reasoning disabled in config.yaml, Continue chat still behaves as if thinking is enabled, and Ollama models that do not support it fail.

At the same time, Edit mode (Command+I) works correctly with the same setup, which suggests the issue may be specific to the chat path rather than the model configuration itself.

Current ~/.continue/config.yaml

name: Local Config
version: 1.0.0
schema: v1

models:
  - name: llama3.1-8b
    provider: ollama
    model: llama3.1:8b
    defaultCompletionOptions:
      reasoning: false

  - name: deepseek-coder-6.7b
    provider: ollama
    model: deepseek-coder:6.7b
    defaultCompletionOptions:
      reasoning: false

  - name: phi3-mini
    provider: ollama
    model: phi3:mini
    defaultCompletionOptions:
      reasoning: false

Notes

  • ~/.continue/config.yaml exists and is being edited correctly.
  • There is no ~/.continue/config.json
  • There is no ~/.continue/config.ts
  • In ~/.continue I only have config.yaml plus Continue internal folders/files.
  • I searched for "thinking" under ~/.continue and the repeated errors appear in logs/core.log.
  • The models are correctly installed in Ollama and work when run directly from terminal.

Ollama models installed locally

  • deepseek-coder:6.7b
  • qwen2.5-coder:7b
  • llama3.1:8b
  • phi3:mini

Relevant log evidence from ~/.continue/logs/core.log

{"error":"registry.ollama.ai/library/deepseek-coder:6.7b does not support thinking"}
{"error":"registry.ollama.ai/library/llama3.1:8b does not support thinking"}
{"error":"registry.ollama.ai/library/phi3:mini does not support thinking"}

Additional context

At one point I also got this config parsing error:

Failed to parse config: models: Expected array, received null

But even after correcting the YAML structure and keeping models as a proper array, the main issue remains: Continue chat still appears to enable thinking internally for Ollama models that do not support it.
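For reference, that earlier parse error is what YAML produces when the models: key is left with no entries: an empty key parses as null, not as an empty array.

```yaml
# Parses with models == null, triggering "Expected array, received null":
models:

# Parses as a (possibly empty) array, which the schema accepts:
models: []
```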

Steps to reproduce

  1. Install Continue version 1.0.60 in IntelliJ IDEA (JetBrains 2024.1+).
  2. Use Ollama as the provider.
  3. Configure local models in ~/.continue/config.yaml with:

     defaultCompletionOptions:
       reasoning: false

  4. Select one of these models in Continue.
  5. Send a prompt in chat.
  6. Observe Continue logs and failed requests mentioning that the model "does not support thinking".
  7. Try Edit mode with Command+I using the same model.
  8. Observe that Edit mode works, while chat fails.

Impact

This makes the configured Ollama models unusable in Continue chat, even though:

  • they work correctly in Ollama itself
  • they work in Continue Edit mode via Command+I

Question

Is this a bug in how Continue chat handles reasoning=false for Ollama models, or is there another required setting to fully disable thinking in chat?

Metadata

Labels

  • area:chat (Relates to chat interface)
  • ide:jetbrains (Relates specifically to JetBrains extension)
  • kind:bug (Indicates an unexpected problem or unintended behavior)
  • os:mac (Happening specifically on Mac)
