Releases · jackmpcollins/magentic
v0.14.0
What's Changed
- Add stop param by @jackmpcollins and @mnicstruwig in #80
Full Changelog: v0.13.0...v0.14.0
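A minimal sketch of the new stop param, assuming it is passed to OpenaiChatModel like the other sampling params and takes a list of strings at which generation halts:
from magentic import prompt
from magentic.chat_model.openai_chat_model import OpenaiChatModel

# Assumption: generation stops before any of the given strings would be produced
@prompt(
    "Count to ten: 1, 2, 3,",
    model=OpenaiChatModel(stop=["7"]),
)
def count() -> str:
    ...

count()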
v0.13.0
What's Changed
- Bump jupyter-server from 2.7.2 to 2.11.2 by @dependabot in #75
- Allow setting api_key in OpenaiChatModel by @jackmpcollins in #76
Full Changelog: v0.12.0...v0.13.0
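A sketch of the new api_key param, assuming it is a keyword argument on OpenaiChatModel that overrides the OPENAI_API_KEY environment variable (the key shown is a placeholder):
from magentic import prompt
from magentic.chat_model.openai_chat_model import OpenaiChatModel

# Assumption: api_key takes precedence over the OPENAI_API_KEY env var
@prompt(
    "Say hello",
    model=OpenaiChatModel("gpt-3.5-turbo", api_key="sk-..."),
)
def say_hello() -> str:
    ...

say_hello()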
v0.12.0
What's Changed
- Bump aiohttp from 3.8.6 to 3.9.0 by @dependabot in #70
- Add OpenAI seed param for deterministic sampling by @jackmpcollins in #71
Full Changelog: v0.11.1...v0.12.0
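A sketch of the new seed param, assuming it is a keyword argument on OpenaiChatModel that is forwarded to the OpenAI API:
from magentic import prompt
from magentic.chat_model.openai_chat_model import OpenaiChatModel

# Assumption: a fixed seed makes repeated calls return (mostly) the same completion
@prompt("Tell me a joke", model=OpenaiChatModel("gpt-3.5-turbo", seed=42))
def tell_joke() -> str:
    ...

tell_joke()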
v0.11.1
v0.11.0
What's Changed
- Add support for Azure via OpenaiChatModel by @jackmpcollins in #65
Full Changelog: v0.10.0...v0.11.0
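A sketch of the Azure support, assuming the Azure client is selected via an api_type argument and that the endpoint and credentials are read from the usual Azure OpenAI environment variables:
from magentic import prompt
from magentic.chat_model.openai_chat_model import OpenaiChatModel

# Assumption: api_type="azure" switches to the Azure OpenAI client,
# with endpoint and key taken from the environment
@prompt(
    "Say hello",
    model=OpenaiChatModel("gpt-4", api_type="azure"),
)
def say_hello() -> str:
    ...

say_hello()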
v0.10.0
v0.9.1
Full Changelog: v0.9.0...v0.9.1
v0.9.0
What's Changed
- Add LiteLLM backend by @jackmpcollins in #54
Full Changelog: v0.8.0...v0.9.0
Example of LiteLLM backend
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel

@prompt(
    "Talk to me! ",
    model=LitellmChatModel("ollama/llama2"),
)
def say_hello() -> str:
    ...

say_hello()
See the Backend/LLM Configuration section of the README for how to set the LiteLLM backend as the default.
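For example, assuming the environment variable names described in that README section (MAGENTIC_BACKEND and MAGENTIC_LITELLM_MODEL), the default could be set before any prompt-functions are called:
import os

# Assumed env var names from the README's Backend/LLM Configuration section
os.environ["MAGENTIC_BACKEND"] = "litellm"
os.environ["MAGENTIC_LITELLM_MODEL"] = "ollama/llama2"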
v0.8.0
What's Changed
- Make backend configurable by @jackmpcollins in #46
- Bump urllib3 from 2.0.6 to 2.0.7 by @dependabot in #47
- Replace black with ruff formatter by @jackmpcollins in #48
- Handle pydantic generic BaseModel in name_type and function schema by @jackmpcollins in #52
- Allow ChatModel to be set with context manager by @jackmpcollins in #53
Full Changelog: v0.7.2...v0.8.0
Allow the chat_model/LLM to be set using a context manager. This makes it easy to reuse the same prompt-function with different models, and neater to set the model dynamically.
from magentic import OpenaiChatModel, prompt

@prompt("Say hello")
def say_hello() -> str:
    ...

@prompt(
    "Say hello",
    model=OpenaiChatModel("gpt-4", temperature=1),
)
def say_hello_gpt4() -> str:
    ...

say_hello()  # Uses env vars or default settings

with OpenaiChatModel("gpt-3.5-turbo"):
    say_hello()  # Uses gpt-3.5-turbo due to context manager
    say_hello_gpt4()  # Uses gpt-4 with temperature=1 because explicitly configured
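The generic pydantic BaseModel handling from #52 could be exercised with a sketch like this (the model and prompt are illustrative, not from the release):
from typing import Generic, TypeVar

from pydantic import BaseModel

from magentic import prompt

T = TypeVar("T")

class Pair(BaseModel, Generic[T]):
    first: T
    second: T

# Assumption: a parametrized generic model works as the return annotation
@prompt("Name two capital cities.")
def get_cities() -> Pair[str]:
    ...

get_cities()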
v0.7.2
What's Changed
- Allow setting max_tokens param by @jackmpcollins in #45
Full Changelog: v0.7.1...v0.7.2
Allow setting the max_tokens param in OpenaiChatModel. The default value for this can also be set using the environment variable MAGENTIC_OPENAI_MAX_TOKENS.
Example
from magentic import prompt
from magentic.chat_model.openai_chat_model import OpenaiChatModel

@prompt("Hello, how are you?", model=OpenaiChatModel(max_tokens=3))
def test() -> str:
    ...

test()
# 'Hello! I'