Different results from LM Studio and Ollama with same local model (read_file) #10859
Unanswered. Sab3rRid3r asked this question in Help.
- When I use Ollama with a local model to ask about project content, file access is not granted correctly: a message says access will be granted, but the read never actually happens.
- With LM Studio, the "Continue read" (read_file) step always fires and access works.
I tested this with Ollama on Linux and macOS and LM Studio on macOS and Windows.
Our company runs a Linux server with a suitable GPU that serves models via Ollama, so it would be good to understand why this works less reliably with Ollama than with LM Studio.
Thanks in advance.
Model qwen3-coder-30b 4bit (has tool_use capabilities)
Continue 1.0.60
LM Studio 0.4.5+2
Ollama 0.17.0
```yaml
provider: lmstudio
model: qwen/qwen3-coder-30b
defaultCompletionOptions:
  contextLength: 64000
  temperature: 0.2
  topP: 0.95
  topK: 40
```

```yaml
provider: ollama
model: qwen3-coder:30b
defaultCompletionOptions:
  contextLength: 256000
  maxTokens: 20000
  temperature: 0.2
  topP: 0.95
  topK: 40
```
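One way to narrow this down (a sketch of my own, not from the original report) is to bypass Continue and call Ollama's `/api/chat` endpoint directly with a minimal tool schema, then check whether the response contains `message.tool_calls`. The `read_file` tool definition and prompt below are illustrative stand-ins, not Continue's actual schema:

```python
import json

# Minimal /api/chat request body with a single hypothetical read_file tool.
# The tool name, description, and prompt are assumptions for testing only.
payload = {
    "model": "qwen3-coder:30b",
    "stream": False,
    "messages": [
        {"role": "user", "content": "Read the file src/main.py"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "read_file",
                "description": "Read a file from the project",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "path": {
                            "type": "string",
                            "description": "Path of the file to read",
                        },
                    },
                    "required": ["path"],
                },
            },
        },
    ],
}

body = json.dumps(payload)
# Send it to the local Ollama server, e.g.:
#   curl http://localhost:11434/api/chat -d "$body"
# If tool calling works at this level, the response's message should
# contain a tool_calls entry invoking read_file; if it does not, the
# problem is below Continue, in Ollama or the model's chat template.
```

If the raw API call does produce `tool_calls` but Continue still fails, the difference is more likely in how the two providers' responses are parsed by the extension.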