
Using locally installed models on Mac M1/Use of alternative models like DEEPSEEK #2200

Closed
BroccoliFin opened this issue Jan 12, 2025 · 4 comments

Comments

@BroccoliFin

Thank you for your work.
I have several questions:

  • Is it possible to use a locally installed Hermes 3 Llama 3.1 8B model on a Mac M1 when creating an agent?
  • If so, how and where do I specify which model to use and the path to it? I did not find this information in the tutorial, or it is not obvious.
  • Is it possible to use models like DeepSeek v2/v3 by specifying an API key, as with OpenAI, or are only the models in the list available?
  • If so, how can this be done? Is it enough to set DEEPSEEK_MODEL= and DEEPSEEK_API_KEY= in the .env.example file and change "modelProvider": "deepseek" accordingly in character.json (sketched below), or is this more fundamental and requires changes to other files?
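To make the last question concrete, this is the setup I have in mind. The DEEPSEEK_MODEL/DEEPSEEK_API_KEY key names and the "deepseek" provider value are my guesses at how it might work, not settings I found documented.

In .env:

DEEPSEEK_API_KEY=your_api_key_here
DEEPSEEK_MODEL=deepseek-chat # guessed model name, taken from the DeepSeek API docs

In character.json:

"modelProvider": "deepseek"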
Contributor

Hello @BroccoliFin! Welcome to the ai16z community. Thank you for opening your first issue; we appreciate your contribution. You are now an ai16z contributor!

@AIFlowML
Collaborator

Hello @BroccoliFin,

In your .env you need to add the following:

# Ollama Configuration

OLLAMA_SERVER_URL=localhost:11434 # Ollama's default local endpoint
OLLAMA_MODEL=model_name # Replace with the tag of your locally pulled model
USE_OLLAMA_EMBEDDING= # Set to TRUE for OLLAMA/1024, leave blank for local
OLLAMA_EMBEDDING_MODEL= # Default: mxbai-embed-large
SMALL_OLLAMA_MODEL= # Default: llama3.2
MEDIUM_OLLAMA_MODEL= # Default: hermes3
LARGE_OLLAMA_MODEL= # Default: hermes3:70b
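
For example, for the Hermes 3 Llama 3.1 8B model you mentioned, something like the following should work. Here hermes3 is the tag the Ollama library uses for that model; check ollama list for the exact name on your machine:

OLLAMA_SERVER_URL=localhost:11434
OLLAMA_MODEL=hermes3
USE_OLLAMA_EMBEDDING=TRUE
OLLAMA_EMBEDDING_MODEL=mxbai-embed-large
SMALL_OLLAMA_MODEL=hermes3
MEDIUM_OLLAMA_MODEL=hermes3
LARGE_OLLAMA_MODEL=hermes3

You also need to set "modelProvider": "ollama" in your character.json so the agent routes requests through Ollama.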

DeepSeek support is coming.

@BroccoliFin
Author

Many thanks for the feedback and tips!!!

@BroccoliFin
Author

Another question: the documentation says that a local launch requires a GPU with CUDA, but what about the unified memory on the Mac M1?
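
For context, the model itself already runs locally through Ollama on Apple Silicon, which uses Metal rather than CUDA, e.g.:

ollama pull hermes3
ollama run hermes3 "Hello"

So I am asking specifically about eliza's local-launch requirements.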
