- Download the Ollama CLI: https://github.com/ollama/ollama
- In one terminal window, start the server: ollama serve
- In another terminal window:
  - Download the model: ollama pull deepseek-r1:8b
  - Run the model: ollama run deepseek-r1:8b
  - Chat with the model interactively.
Or use Docker (https://hub.docker.com/r/ollama/ollama):
- Start the server: docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
- Run the model: docker exec -it ollama ollama run deepseek-r1:8b
Test the server over its REST API (default port 11434):
- List installed models: curl http://127.0.0.1:11434/api/tags
- Send a prompt: curl http://127.0.0.1:11434/api/generate -d '{"model": "deepseek-r1:8b", "stream": false, "prompt": "1+1?"}'
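The same /api/generate call can be made from Python using only the standard library. A minimal sketch, assuming the server is running on the default port and the model has already been pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434"  # default Ollama address


def build_generate_payload(model: str, prompt: str, stream: bool = False) -> bytes:
    """Build the JSON body for /api/generate, mirroring the curl example above."""
    return json.dumps({"model": model, "stream": stream, "prompt": prompt}).encode("utf-8")


def generate(model: str, prompt: str) -> str:
    """POST a prompt to /api/generate and return the model's response text."""
    req = urllib.request.Request(
        OLLAMA_URL + "/api/generate",
        data=build_generate_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With "stream": false, Ollama returns one JSON object; the
        # generated text is in its "response" field.
        return json.loads(resp.read())["response"]


# Usage (requires a running server):
# print(generate("deepseek-r1:8b", "1+1?"))
```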
To use the Ollama Python library instead:
- Create a virtual environment: python3 -m venv .venv
- Activate it: source .venv/bin/activate (or .venv\Scripts\activate on Windows)
- Install the library: pip install -U ollama
  (or, outside a venv: pip3 install -U ollama --break-system-packages)
- Run the script: python3 DeepSeek-ollama2.py
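deepseek-r1 is a reasoning model, and its replies typically wrap the chain-of-thought in `<think>...</think>` tags before the final answer. A small helper, sketched here under that assumption (the tag format, not anything specific to DeepSeek-ollama2.py), can split the reasoning from the answer:

```python
import re


def split_think(text: str) -> tuple[str, str]:
    """Split a deepseek-r1 reply into (reasoning, answer).

    Assumes the reasoning, if present, is wrapped in a single
    <think>...</think> block at the start of the reply.
    """
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not m:
        return "", text.strip()  # no reasoning block found
    reasoning = m.group(1).strip()
    answer = text[m.end():].strip()
    return reasoning, answer


# Usage:
# reasoning, answer = split_think(response_text)
```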
References:
- DeepSeek models: https://ollama.com/library/deepseek-r1
- Ollama Python library: https://pypi.org/project/ollama/