
llm-nomic-api-embed


Create embeddings using the Nomic API

Installation

Install this plugin in the same environment as LLM.

llm install llm-nomic-api-embed

Usage

This plugin requires a Nomic API key. Nomic API keys include a generous free allowance for their embedding API.

Configure the key like this:

llm keys set nomic
# Paste key here

You can then use the Nomic embedding models like this:

llm embed -m nomic-1.5 -c 'hello world'

This will return a 768-item floating point array as JSON.

See the LLM embeddings documentation for more things you can do with this tool.
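
You can also call the same models from Python using LLM's embeddings API once the plugin is installed. A minimal sketch, using the nomic-1.5 alias and 768-dimension output from the example above:

import llm

# Load the Nomic text embedding model via its alias
model = llm.get_embedding_model("nomic-1.5")

# embed() returns a list of floats
vector = model.embed("hello world")
print(len(vector))  # 768 for nomic-embed-text-v1.5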

Models

Run llm embed-models for a full list. The Nomic models are:

nomic-embed-text-v1 (aliases: nomic-1)
nomic-embed-text-v1.5 (aliases: nomic-1.5)
nomic-embed-text-v1.5-512 (aliases: nomic-1.5-512)
nomic-embed-text-v1.5-256 (aliases: nomic-1.5-256)
nomic-embed-text-v1.5-128 (aliases: nomic-1.5-128)
nomic-embed-text-v1.5-64 (aliases: nomic-1.5-64)
nomic-embed-vision-v1
nomic-embed-vision-v1.5
nomic-embed-combined-v1
nomic-embed-combined-v1.5
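
The numeric suffixes on the v1.5 text variants presumably correspond to the number of output dimensions, so the smaller variants trade accuracy for more compact vectors. As a sketch, assuming a quotes.csv file whose first column is an ID and whose remaining columns hold the text to embed, you could build a collection with the 256-dimension variant like this:

llm embed-multi quotes quotes.csv \
  -m nomic-1.5-256 -d quotes.db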

Vision models can be used with image files using the --binary option, for example:

llm embed-multi images --files . '*.png' \
  --binary --model nomic-embed-vision-v1.5

Combined vision and text models

The nomic-embed-combined-v1 and nomic-embed-combined-v1.5 models are special - they will automatically use their respective text models for text inputs and their respective vision models for images.

This means you can use them to create a collection that mixes images and text, or you can create an image collection with them and then use text to find similar images.

Here's how to do that for a photos/ directory full of JPEGs:

llm embed-multi --binary -m nomic-embed-combined-v1.5 \
  -d photos.db photos --files photos/ '*.jpeg'

Then run similarity searches like this:

llm similar photos -d photos.db -c pelican
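
The same collection can also be queried from Python using LLM's Collection class. A sketch, assuming the photos.db database created above:

import llm
import sqlite_utils

# Open the database that llm embed-multi wrote to
db = sqlite_utils.Database("photos.db")

# Attach to the existing "photos" collection
collection = llm.Collection("photos", db, model_id="nomic-embed-combined-v1.5")

# Text query against image embeddings, thanks to the combined model
for entry in collection.similar("pelican", number=5):
    print(entry.score, entry.id)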

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-nomic-api-embed
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

llm install -e '.[test]'

To run the tests:

pytest
