Python API Requests to work with Ollama Server running Large Language Models

wkwwa/deepseek-ollama

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

1 Commit
 
 
 
 
 
 
 
 

Repository files navigation

Ollama - Run Large Language Models - DeepSeek

  1. Download the ollama CLI: https://github.com/ollama/ollama
  2. In one CLI window, start the server: ollama serve
  3. In another CLI window, download the model: ollama pull deepseek-r1:8b
  4. Run the model: ollama run deepseek-r1:8b
  5. Chat with the model.
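After the pull in step 3, you can confirm the model is installed by querying the server's /api/tags endpoint. A minimal sketch of parsing that response in Python, using only the standard library (the sample JSON below is illustrative, not live server output):

```python
import json

def extract_model_names(tags_body):
    """Parse the JSON body returned by GET /api/tags into a list of model names."""
    return [m["name"] for m in json.loads(tags_body).get("models", [])]

# With the server running, the same body comes from:
#   urllib.request.urlopen("http://127.0.0.1:11434/api/tags").read()
sample = '{"models": [{"name": "deepseek-r1:8b"}]}'
print(extract_model_names(sample))  # → ['deepseek-r1:8b']
```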

Or run it with Docker: https://hub.docker.com/r/ollama/ollama

  1. Start the server container: docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  2. Run the model inside the container: docker exec -it ollama ollama run deepseek-r1:8b

Test the server with cURL

curl http://127.0.0.1:11434/api/tags
curl http://127.0.0.1:11434/api/generate -d '{"model": "deepseek-r1:8b", "stream": false, "prompt": "1+1?" }'
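The second curl call above maps directly onto a urllib request from the standard library. A sketch, assuming the default server address from the earlier steps; it only builds the request, since actually sending it needs a running server:

```python
import json
import urllib.request

def build_generate_request(prompt, model="deepseek-r1:8b",
                           base="http://127.0.0.1:11434"):
    """Build a POST request mirroring the curl /api/generate command above."""
    payload = {"model": model, "stream": False, "prompt": prompt}
    return urllib.request.Request(
        f"{base}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("1+1?")
# urllib.request.urlopen(req) returns a JSON body whose "response"
# field holds the model's answer (requires the server to be running).
```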

Install Ollama Python Library

python3 -m venv .venv 
source .venv/bin/activate (or .venv\Scripts\activate on Windows)
pip install -U ollama

Or, to install into the system Python instead (the --break-system-packages flag overrides pip's externally-managed-environment protection):

pip3 install -U ollama --break-system-packages

Run

python3 DeepSeek-ollama2.py
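The contents of DeepSeek-ollama2.py are not shown here, but a minimal script built on the ollama library might look like the sketch below. The ask helper and its injectable chat parameter are illustrative additions (the injection only exists so the call can be exercised without a live server); ollama.chat itself is the library's real entry point:

```python
def ask(prompt, model="deepseek-r1:8b", chat=None):
    """Send one user message to the model and return the reply text."""
    if chat is None:
        import ollama  # pip install -U ollama; needs the server running
        chat = ollama.chat
    response = chat(model=model, messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"]

# With the server running: print(ask("1+1?"))
```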

Models and Library

DeepSeek Models: https://ollama.com/library/deepseek-r1

Ollama Python Library: https://pypi.org/project/ollama/
