Support for Foundry Local #10440
Replies: 3 comments 3 replies
Okay, I found it. It's not listed as a provider, but we can use it since it's OpenAI-compatible:

```yaml
name: Local Config
version: 1.0.0
schema: v1
models:
  - name: foundry
    provider: openai
    model: AUTODETECT
    apiBase: http://localhost:64337/v1
    useLegacyCompletionsEndpoint: true
```
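Since the endpoint is OpenAI-compatible, any plain HTTP client can talk to it, not just Continue. A minimal sketch using only the standard library — the port `64337` is taken from the config above and will likely differ on your machine, and the model name `phi-4` is illustrative:

```python
import json
import urllib.request

# apiBase from the config above; Foundry Local's port may differ per install.
API_BASE = "http://localhost:64337/v1"

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-style chat completion request.
    No Authorization header is needed: Foundry Local does not require auth."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "phi-4" is a placeholder; use whatever model your instance serves.
req = chat_request("phi-4", "Hello")
print(req.full_url)  # http://localhost:64337/v1/chat/completions
# To actually send it: urllib.request.urlopen(req).read()
```

If this request succeeds outside the editor, any remaining problem is in the Continue config rather than the server.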
Foundry Local uses an OpenAI-compatible API, so you can use it with Continue today via the generic OpenAI provider. Configuration:

```yaml
models:
  - name: foundry-local
    provider: openai
    model: phi-4  # or whatever model you have loaded
    apiBase: http://localhost:5272/v1
    apiKey: not-needed  # Foundry Local does not require auth
    roles:
      - chat
      - edit
```

Verify it works:

```shell
curl http://localhost:5272/v1/models
```

Tips: if you want native provider support, it might be worth a feature request if there is enough interest! We test various local inference backends at Revolution AI; Foundry Local + Continue works smoothly via OpenAI compat.
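The `curl` check returns the standard OpenAI model-list format, and the `id` fields are exactly what goes into Continue's `model:` setting. A small sketch of pulling those ids out — the sample payload here is illustrative (the model ids are borrowed from this thread), so run the `curl` command to see what your instance actually reports:

```python
import json

# Illustrative response in the standard OpenAI /v1/models list shape;
# your Foundry Local instance will report its own model ids.
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "phi-4", "object": "model"},
    {"id": "qwen2.5-coder-1.5b-instruct-generic-cpu:4", "object": "model"}
  ]
}
""")

def model_ids(listing: dict) -> list[str]:
    """Extract the ids usable in Continue's `model:` field."""
    return [m["id"] for m in listing.get("data", [])]

print(model_ids(sample))
# ['phi-4', 'qwen2.5-coder-1.5b-instruct-generic-cpu:4']
```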
Has anyone been able to get this to work with autocomplete? I have the following config:

```yaml
- name: Qwen2.5-coder
  provider: openai
  model: qwen2.5-coder-1.5b-instruct-generic-cpu:4
  apiBase: http://localhost:5272/v1
  useLegacyCompletionsEndpoint: false
  roles:
    - autocomplete
    - edit
    - chat
```

No error card is displayed, but in the developer tools console I see an error. Chat functionality works fine, and I have tried setting the autocomplete timeout to the maximum of 5000 ms.
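Chat working while autocomplete silently fails is consistent with the server implementing `/v1/chat/completions` but not the completions-style route that autocomplete requests typically use; that is an assumption worth testing, not a confirmed cause. A diagnostic sketch (port and model name copied from the config above) that builds a raw `/v1/completions` request you can fire at the server:

```python
import json
import urllib.request

API_BASE = "http://localhost:5272/v1"  # from the config above
MODEL = "qwen2.5-coder-1.5b-instruct-generic-cpu:4"

def completion_request(prompt: str) -> urllib.request.Request:
    """Legacy /v1/completions request: plain prompt in, text out.
    This is the shape completion-style (non-chat) requests take."""
    body = json.dumps({
        "model": MODEL,
        "prompt": prompt,
        "max_tokens": 32,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = completion_request("def fib(n):")
print(req.full_url)  # http://localhost:5272/v1/completions
# Sending this (urllib.request.urlopen(req)) and getting a 404/405 would
# suggest the server lacks the completions route, which would explain
# chat succeeding while autocomplete fails without an error card.
```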
Is Foundry Local supported? If yes, where is it? If no, are there plans to add it?
https://www.foundrylocal.ai/