Replies: 3 comments 2 replies
-
Okay, progress. After looking at the Ollama server API logs I saw the 404 errors, so it was getting to the server, but the endpoint is incorrect in the gem:
Adding this monkey patch let it hit the correct endpoint, which I can see returned a response. The new problem is when it's parsing the response.
-
The only place this code appears is in Message.
It's defaulted, so I don't know why it would possibly be nil.
-
I suggest adding /v1 to your configured base URL, as specified here: https://rubyllm.com/configuration#ollama-api-base-ollama_api_base
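For example, a sketch of that configuration (assuming Ollama's default local port; `ollama_api_base` is the setting named in the linked docs):

```ruby
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  # Append /v1 so the gem hits Ollama's OpenAI-compatible endpoints
  config.ollama_api_base = "http://localhost:11434/v1"
end
```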
-
I'm attempting to build a simple proof of concept for sending a message to deepseek-r1 running under Ollama (I'm not set on this model specifically; it's just the one I pulled because it supports tools and I want to play around with it).
I have this initializer code:
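The initializer is essentially a `RubyLLM.configure` block pointing the gem at the local Ollama instance. A minimal sketch, assuming Ollama's default port (not the exact original snippet):

```ruby
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  # Local Ollama instance on its default port (no /v1 suffix here)
  config.ollama_api_base = "http://localhost:11434"
end
```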
Ollama is running and responds to curl requests locally:
But when I attempt to hit it with RubyLLM, I don't get anything back.
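The call itself is along these lines (a sketch, not the exact snippet; `provider: :ollama` is an assumption based on the RubyLLM docs):

```ruby
# Ask the local deepseek-r1 model for a reply via RubyLLM
chat = RubyLLM.chat(model: "deepseek-r1", provider: :ollama)
response = chat.ask("Say hello and confirm you can hear me")
puts response.content
```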
I've tried various combinations of specifying the model differently, such as:
- deepseek/deepseek-r1
- deepseek-r1:latest
- deepseek/deepseek-r1:latest
but never get anything back. The warning makes me think I'm doing something wrong, but I'm not sure where to turn next.