1 change: 1 addition & 0 deletions python/semantic_kernel/connectors/ai/README.md
@@ -42,6 +42,7 @@ All base clients inherit from the [`AIServiceClientBase`](../../services/ai_serv
| | [`GoogleAITextEmbedding`](./google/google_ai/services/google_ai_text_embedding.py) |
| HuggingFace | [`HuggingFaceTextCompletion`](./hugging_face/services/hf_text_completion.py) |
| | [`HuggingFaceTextEmbedding`](./hugging_face/services/hf_text_embedding.py) |
| [MiniMax](./minimax/README.md) | [`MiniMaxChatCompletion`](./minimax/services/minimax_chat_completion.py) |
| Mistral AI | [`MistralAIChatCompletion`](./mistral_ai/services/mistral_ai_chat_completion.py) |
| | [`MistralAITextEmbedding`](./mistral_ai/services/mistral_ai_text_embedding.py) |
| [Nvidia](./nvidia/README.md) | [`NvidiaTextEmbedding`](./nvidia/services/nvidia_text_embedding.py) |
62 changes: 62 additions & 0 deletions python/semantic_kernel/connectors/ai/minimax/README.md
@@ -0,0 +1,62 @@
# semantic_kernel.connectors.ai.minimax

This connector enables integration with the MiniMax API for chat completion, allowing you to use MiniMax's models within the Semantic Kernel framework.

MiniMax provides an OpenAI-compatible API, making integration straightforward.

## Quick start

### Initialize the kernel
```python
import semantic_kernel as sk
kernel = sk.Kernel()
```

### Add MiniMax chat completion service
You can provide your API key directly or through environment variables.
```python
from semantic_kernel.connectors.ai.minimax import MiniMaxChatCompletion

chat_service = MiniMaxChatCompletion(
    ai_model_id="MiniMax-M2.5",  # Default model if not specified
    api_key="your-minimax-api-key",  # Can also use the MINIMAX_API_KEY env variable
    service_id="minimax-chat",  # Optional service identifier
)
kernel.add_service(chat_service)
```

### Basic chat completion
```python
response = await kernel.invoke_prompt("Hello, how are you?")  # must run inside an async event loop
```

### Using with Chat Completion Agent
```python
from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.minimax import MiniMaxChatCompletion

agent = ChatCompletionAgent(
    service=MiniMaxChatCompletion(),
    name="SK-Assistant",
    instructions="You are a helpful assistant.",
)
response = await agent.get_response(messages="Write a haiku about Semantic Kernel.")
print(response.content)
```

## Available Models

- `MiniMax-M2.5` - Standard model with 204K context window
- `MiniMax-M2.5-highspeed` - High-speed variant with 204K context window

## Environment Variables

| Variable | Description |
|----------|-------------|
| `MINIMAX_API_KEY` | Your MiniMax API key |
| `MINIMAX_BASE_URL` | API endpoint (defaults to `https://api.minimax.io/v1`) |
| `MINIMAX_CHAT_MODEL_ID` | Default chat model ID |
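With these variables set, `MiniMaxChatCompletion()` can be constructed with no arguments. A typical shell setup might look like this (values are placeholders):

```shell
export MINIMAX_API_KEY="your-minimax-api-key"
export MINIMAX_BASE_URL="https://api.minimax.io/v1"  # optional; this is the default
export MINIMAX_CHAT_MODEL_ID="MiniMax-M2.5"
```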

## Notes

- The MiniMax API requires `temperature` to be in the range (0.0, 1.0]; a value of exactly 0.0 is rejected.
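As a rough illustration of that constraint (a standalone sketch, not part of the connector — the connector enforces the same range via a pydantic `Field` validator), a helper that checks the temperature before sending a request might look like:

```python
def validate_minimax_temperature(temperature: float) -> float:
    """Reject temperatures outside MiniMax's accepted range (0.0, 1.0]."""
    if not 0.0 < temperature <= 1.0:
        raise ValueError(f"MiniMax requires 0.0 < temperature <= 1.0, got {temperature}")
    return temperature
```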
15 changes: 15 additions & 0 deletions python/semantic_kernel/connectors/ai/minimax/__init__.py
@@ -0,0 +1,15 @@
# Copyright (c) Microsoft. All rights reserved.

from semantic_kernel.connectors.ai.minimax.prompt_execution_settings.minimax_prompt_execution_settings import (
    MiniMaxChatPromptExecutionSettings,
    MiniMaxPromptExecutionSettings,
)
from semantic_kernel.connectors.ai.minimax.services.minimax_chat_completion import MiniMaxChatCompletion
from semantic_kernel.connectors.ai.minimax.settings.minimax_settings import MiniMaxSettings

__all__ = [
    "MiniMaxChatCompletion",
    "MiniMaxChatPromptExecutionSettings",
    "MiniMaxPromptExecutionSettings",
    "MiniMaxSettings",
]
@@ -0,0 +1 @@
# Copyright (c) Microsoft. All rights reserved.
@@ -0,0 +1,46 @@
# Copyright (c) Microsoft. All rights reserved.

from typing import Annotated, Any, Literal

from pydantic import BaseModel, Field

from semantic_kernel.connectors.ai.prompt_execution_settings import PromptExecutionSettings


class MiniMaxPromptExecutionSettings(PromptExecutionSettings):
    """Settings for MiniMax prompt execution."""


class MiniMaxChatPromptExecutionSettings(MiniMaxPromptExecutionSettings):
    """Settings for MiniMax chat prompt execution."""

    messages: list[dict[str, str]] | None = None
    ai_model_id: Annotated[str | None, Field(serialization_alias="model")] = None
    temperature: Annotated[float | None, Field(gt=0.0, le=1.0)] = None
    top_p: float | None = None
    n: int | None = None
    stream: bool = False
    stop: str | list[str] | None = None
    max_tokens: int | None = None
    presence_penalty: float | None = None
    frequency_penalty: float | None = None
    user: str | None = None
    tools: list[dict[str, Any]] | None = None
    tool_choice: str | dict[str, Any] | None = None
    response_format: (
        dict[Literal["type"], Literal["text", "json_object"]] | dict[str, Any] | type[BaseModel] | type | None
    ) = None
    seed: int | None = None
    extra_headers: dict | None = None
    extra_body: dict | None = None
    timeout: float | None = None

    def prepare_settings_dict(self, **kwargs) -> dict[str, Any]:
        """Prepare the settings as a dictionary for the API request."""
        return self.model_dump(
            exclude={"service_id", "extension_data", "structured_json_response", "response_format"},
            exclude_none=True,
            by_alias=True,
        )
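To illustrate what `prepare_settings_dict` produces (aliasing `ai_model_id` to `model`, dropping `None` fields and the excluded keys), here is a plain-Python sketch of the same transformation — an illustration only, not the connector's code path:

```python
def sketch_prepare_settings_dict(settings: dict) -> dict:
    """Mimic prepare_settings_dict: drop excluded and None keys, alias ai_model_id -> model."""
    excluded = {"service_id", "extension_data", "structured_json_response", "response_format"}
    out = {}
    for key, value in settings.items():
        if key in excluded or value is None:
            continue
        out["model" if key == "ai_model_id" else key] = value
    return out

request = sketch_prepare_settings_dict(
    {"ai_model_id": "MiniMax-M2.5", "temperature": 0.7, "top_p": None, "service_id": "minimax-chat"}
)
# request == {"model": "MiniMax-M2.5", "temperature": 0.7}
```

Note that falsy but non-`None` values such as `stream=False` survive the filtering, matching `exclude_none=True` in `model_dump`.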
@@ -0,0 +1 @@
# Copyright (c) Microsoft. All rights reserved.