[Ollama] ERROR: Non-retryable error occurred: 404 page not found #182
What's the URL for your Docker setup? Is it
http://host.docker.internal:11434/API/generate? The 404 indicates that
it found the server but can't find the endpoint (path).
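For context on that distinction: a connection error means the server itself is unreachable, while a 404 with a body like `404 page not found` means the server answered but no route matched the path. Ollama's native REST paths are lowercase (`/api/generate`, `/api/chat`, `/api/tags`), and a miscased path such as `/API/generate` will typically 404 even though the server is up. A minimal sketch of a path check (the helper name is hypothetical, not part of this node's code):

```python
from urllib.parse import urlparse

# Known native Ollama REST paths (lowercase on the server side).
OLLAMA_PATHS = {"/api/generate", "/api/chat", "/api/tags"}

def diagnose_ollama_url(url: str) -> str:
    """Return a short hint about why a configured Ollama URL might 404.

    Hypothetical helper for illustration only.
    """
    path = urlparse(url).path
    if path in OLLAMA_PATHS:
        return "path looks correct"
    if path.lower() in OLLAMA_PATHS:
        return f"path is case-sensitive: use {path.lower()!r}"
    return f"unknown path {path!r}: expected one of {sorted(OLLAMA_PATHS)}"

print(diagnose_ollama_url("http://host.docker.internal:11434/API/generate"))
# → path is case-sensitive: use '/api/generate'
```

Note that the log below shows the Unload Model node validating the lowercase URL successfully, so in this case the 404 is more likely coming from a different URL configured for the prompt-enhancement request itself.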
On Tue, Feb 4, 2025, 6:59 AM Marko Dzidic wrote:
Description:
I'm running ComfyUI inside a Docker container along with an Ollama local
server. I have edited urls.json, and the models load correctly. However,
when I attempt to generate an enhanced prompt, I receive the following
error from the "Troubleshooting" section:
➤ Begin Log for: Advanced Prompt Enhancer, Node #6:
✦ INFO: Additional parameters input: []
✦ INFO: Setting client to OpenAI Open Source LLM object
✦ INFO: Maximum tries set to: 3
✦ ERROR: Non-retryable error occurred: 404 page not found
✦ ERROR: Request failed: 404 page not found
➤ Begin Log for: Ollama Unload Model Setting:
✦ INFO: URL was validated and is being presented as: http://host.docker.internal:11434/api/generate
✦ INFO: Attempting to set model TTL using URL: http://host.docker.internal:11434/api/generate
✦ INFO: Model unload setting successful. Response: {"model":"llama3.1:latest","created_at":"2025-02-04T13:17:00.393244595Z","response":"","done":true,"done_reason":"load"}
Expected Behavior:
The enhanced prompt should generate successfully without errors.
Additional Information:
- ComfyUI version: 0.3.13
- Ollama version: 0.3.9
You can also try running it with the http.../generate URL using the AI_service set to Ollama, or with this base URL: http://host.internal:11434/v1
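The `/v1` base works because Ollama also exposes an OpenAI-compatible API alongside its native one, and the two use different paths. A minimal sketch of the URLs involved (host value taken from this thread; adjust for your own setup):

```python
# Base URL for an Ollama server reached from inside a Docker container
# (assumption from this thread; plain localhost works outside Docker).
BASE = "http://host.docker.internal:11434"

# Native Ollama endpoint (what the "Ollama Unload Model Setting" log above hits):
native_generate = f"{BASE}/api/generate"

# OpenAI-compatible endpoints (what configuring a ".../v1" base URL implies):
openai_chat = f"{BASE}/v1/chat/completions"
openai_models = f"{BASE}/v1/models"

print(native_generate)  # → http://host.docker.internal:11434/api/generate
print(openai_chat)      # → http://host.docker.internal:11434/v1/chat/completions
```

So a client configured for the OpenAI-style service should be given the `/v1` base, while one configured for the native Ollama service needs the full `/api/generate` path; mixing the two styles produces exactly this kind of 404.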
Ollama URL
Good news, enjoy! :)