Labels: P3, feature request, integration:fastembed

Description
Is your feature request related to a problem? Please describe.
I can't run the embedder on GPU, so it is ridiculously slow.
Describe the solution you'd like
Right now the DocumentEmbedder and TextEmbedder don't offer an option to pass ONNX execution providers through to the backend that creates the model:
Lines 48 to 50 in 9db9ca1:

```python
self.model = TextEmbedding(
    model_name=model_name, cache_dir=cache_dir, threads=threads, local_files_only=local_files_only
)
```
This is the standard way to enable GPU support according to the FastEmbed docs:

```python
embedding_model = TextEmbedding(
    model_name="BAAI/bge-small-en-v1.5", providers=["CUDAExecutionProvider"]
)
```

Seems like it would be a ~6 line change.
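A minimal sketch of what that change might look like: the backend's `__init__` grows an optional `providers` argument and forwards it to `TextEmbedding`. The class name `_FastembedEmbeddingBackend` and the exact signature are assumptions, and the `TextEmbedding` class below is only a stand-in for the real `fastembed.TextEmbedding` so the sketch runs without fastembed installed:

```python
from typing import List, Optional


class TextEmbedding:
    """Stand-in for fastembed.TextEmbedding (assumption: the real class
    accepts a `providers` argument and forwards it to onnxruntime)."""

    def __init__(self, model_name, cache_dir=None, threads=None,
                 local_files_only=False, providers=None):
        self.model_name = model_name
        # None means onnxruntime picks its default (CPU) execution provider.
        self.providers = providers


class _FastembedEmbeddingBackend:
    """Hypothetical backend: thread an optional `providers` list through."""

    def __init__(
        self,
        model_name: str,
        cache_dir: Optional[str] = None,
        threads: Optional[int] = None,
        local_files_only: bool = False,
        # new parameter: ONNX execution providers, e.g. ["CUDAExecutionProvider"]
        providers: Optional[List[str]] = None,
    ):
        self.model = TextEmbedding(
            model_name=model_name,
            cache_dir=cache_dir,
            threads=threads,
            local_files_only=local_files_only,
            providers=providers,
        )


backend = _FastembedEmbeddingBackend(
    model_name="BAAI/bge-small-en-v1.5", providers=["CUDAExecutionProvider"]
)
print(backend.model.providers)  # ["CUDAExecutionProvider"]
```

The same `providers` keyword would then need to be exposed on the public embedder components and passed down to the backend factory, which is where most of the ~6 changed lines would land.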