---
meta:
  title: How to query embedding models
  description: Learn how to interact with embedding models using Scaleway's Generative APIs service.
content:
  h1: How to query embedding models
  paragraph: Learn how to interact with embedding models using Scaleway's Generative APIs service.
tags: generative-apis ai-data embedding-models embeddings-api
dates:
  validation: 2024-10-30
  posted: 2024-08-28
---

Scaleway's Generative APIs service allows users to interact with embedding models hosted on the platform. The embedding API provides a simple interface for generating vector representations (embeddings) based on your input data. The embedding service is OpenAI compatible. Refer to OpenAI's embedding documentation for more detailed information.

Before you start, you need:

- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- A valid API key for API authentication
- Python 3.7+ installed on your system

## Querying embedding models via API

The embedding model takes text as input and outputs a vector (a list of floating-point numbers) that can be used for tasks like similarity comparisons and search. The instructions below show you how to query the model programmatically using the OpenAI SDK.

### Installing the OpenAI SDK

First, ensure you have the OpenAI SDK installed in your development environment. You can install it using pip:

```sh
pip install openai
```

### Initializing the client

Initialize the OpenAI client with your base URL and API key:

```python
from openai import OpenAI

# Initialize the client with your base URL and API key
client = OpenAI(
    base_url="https://api.scaleway.ai/v1",  # Scaleway's Generative APIs service URL
    api_key="<SCW_API_KEY>"  # Your unique API key from Scaleway
)
```
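Rather than hardcoding the key, a common pattern is to read it from an environment variable. The sketch below is illustrative and assumes the variable name `SCW_API_KEY`, which is not mandated by Scaleway:

```python
import os

# "SCW_API_KEY" is an assumed variable name for this sketch; adjust as needed.
api_key = os.environ.get("SCW_API_KEY", "")
if not api_key:
    print("SCW_API_KEY is not set; export it before making API calls")

# The value can then be passed as api_key=api_key when constructing the client.
```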

### Generating embeddings with bge-multilingual-gemma2

You can now generate embeddings using the bge-multilingual-gemma2 model, as in the following example:

```python
# Generate embeddings using the 'bge-multilingual-gemma2' model
embedding_response = client.embeddings.create(
    input="Artificial Intelligence is transforming the world.",
    model="bge-multilingual-gemma2"
)

# Output the embedding vector
print(embedding_response.data[0].embedding)
```

This code sends the input text to the bge-multilingual-gemma2 embedding model, which returns a vector representation of the text. The bge-multilingual-gemma2 model is specifically designed for generating high-quality sentence embeddings.
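Embedding vectors are typically consumed downstream by a similarity metric. As an illustrative sketch (plain Python, not part of the Scaleway API), cosine similarity between two such vectors can be computed like this:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding_response.data[0].embedding values
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
print(cosine_similarity(v1, v2))  # ≈ 1.0 for identical vectors
```

Values close to 1.0 indicate semantically similar texts; values near 0 indicate unrelated texts.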

## Model parameters and their effects

The following parameters can be adjusted to influence the output of the embedding model:

- `input` (string or array of strings): the text or data you want to convert into vectors.
- `model` (string): the specific embedding model to use. Refer to the list of supported models in the Scaleway documentation.
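Since `input` also accepts an array of strings, several texts can be embedded in one request. A minimal sketch of such a batch request follows; the texts and the comment about the response shape are illustrative, and the `request` dictionary would be passed as keyword arguments to `client.embeddings.create`:

```python
# Batch request sketch: `input` may be an array of strings.
texts = [
    "Artificial Intelligence is transforming the world.",
    "Embeddings map text to vectors of floats.",
]

# These keyword arguments could be passed as client.embeddings.create(**request)
request = {
    "input": texts,
    "model": "bge-multilingual-gemma2",
}

# Each entry of the returned response.data carries an `index` field matching
# the position of the corresponding text in `texts`.
print(len(request["input"]))  # number of texts to embed
```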
If you encounter an error such as 403 Forbidden, refer to the [API documentation](/generative-apis/api-cli/understanding-errors) for troubleshooting tips.