Thank you for your work.
I have several questions.
Is it possible to use a locally installed Hermes 3 Llama 3.1 8B model on a Mac M1 when creating an agent?
If so, how and where do I specify which model to use and the path to it? I did not find this information in the tutorial, or it is not obvious.
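For context, this is roughly what I imagine the setup would look like. The provider name `"llama_local"`, the variable `LLAMALOCAL_PATH`, and the agent name are only my guesses, since I could not find them documented:

```json
{
  "name": "MyLocalAgent",
  "modelProvider": "llama_local"
}
```

```
# .env -- my guess at how the local GGUF file would be pointed to
LLAMALOCAL_PATH=/path/to/Hermes-3-Llama-3.1-8B.gguf
```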
Is it possible to use models like DeepSeek v2 or v3 by specifying an API key, as in the case of OpenAI, or are only the models in the list available?
If so, how can this be done? Is it enough to set the following in the .env.example file:
DEEPSEEK_MODEL= and DEEPSEEK_API_KEY=, and change "modelProvider": "deepseek" accordingly in character.json, or is this more fundamental and requires meddling with other files?
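To be concrete, I am asking whether a configuration along these lines would be sufficient. The env variable names and "modelProvider" value are the ones from my question above; the agent name and the placeholder values are just illustrations, and I have not verified that such a provider is actually wired up:

```
# .env -- values are placeholders
DEEPSEEK_API_KEY=<your DeepSeek API key>
DEEPSEEK_MODEL=<a DeepSeek model id>
```

```json
{
  "name": "MyDeepSeekAgent",
  "modelProvider": "deepseek"
}
```

with the rest of character.json left unchanged, or does the provider also need to be registered elsewhere?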
Hello @BroccoliFin! Welcome to the ai16z community. Thank you for opening your first issue; we appreciate your contribution. You are now an ai16z contributor!