
Releases: jackmpcollins/magentic

v0.14.0

08 Jan 06:41

What's Changed

Full Changelog: v0.13.0...v0.14.0

v0.13.0

06 Dec 09:16

What's Changed

Full Changelog: v0.12.0...v0.13.0

v0.12.0

29 Nov 05:59

What's Changed

Full Changelog: v0.11.1...v0.12.0

v0.11.1

25 Nov 21:30

What's Changed

Full Changelog: v0.11.0...v0.11.1

v0.11.0

25 Nov 18:54

What's Changed

Full Changelog: v0.10.0...v0.11.0

v0.10.0

15 Nov 05:28

What's Changed

Full Changelog: v0.9.1...v0.10.0

v0.9.1

07 Nov 05:04

v0.9.0

06 Nov 05:08

What's Changed

Full Changelog: v0.8.0...v0.9.0


Example of using the LiteLLM backend

from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel


@prompt(
    "Talk to me! ",
    model=LitellmChatModel("ollama/llama2"),  # any LiteLLM-supported model, here a local Ollama llama2
)
def say_hello() -> str:
    ...


say_hello()

See the Backend/LLM Configuration section of the README for how to set the LiteLLM backend as the default.
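A minimal sketch of what that looks like, assuming the MAGENTIC_BACKEND and MAGENTIC_LITELLM_MODEL environment variables described in the README (check the Backend/LLM Configuration section for the exact names):

# Sketch: select the default backend via environment variables.
# Variable names assumed from the README's MAGENTIC_* convention.
import os

os.environ["MAGENTIC_BACKEND"] = "litellm"
os.environ["MAGENTIC_LITELLM_MODEL"] = "ollama/llama2"

from magentic import prompt


@prompt("Talk to me! ")  # No model=... needed; the default backend is used
def say_hello() -> str:
    ...


say_hello()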

v0.8.0

02 Nov 06:41

What's Changed

Full Changelog: v0.7.2...v0.8.0


Allow the chat model/LLM to be set using a context manager. This lets the same prompt-function be reused with different models, and makes it neater to set the model dynamically (see the sketch after the example below).

from magentic import OpenaiChatModel, prompt


@prompt("Say hello")
def say_hello() -> str:
    ...


@prompt(
    "Say hello",
    model=OpenaiChatModel("gpt-4", temperature=1),
)
def say_hello_gpt4() -> str:
    ...


say_hello()  # Uses env vars or default settings

with OpenaiChatModel("gpt-3.5-turbo"):
    say_hello()  # Uses gpt-3.5-turbo due to context manager
    say_hello_gpt4()  # Uses gpt-4 with temperature=1 because explicitly configured
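Because the context manager scopes the model, it can also be chosen at runtime. A minimal sketch using only the API shown above (say_hello_with is a hypothetical helper, not part of magentic):

from magentic import OpenaiChatModel, prompt


@prompt("Say hello")
def say_hello() -> str:
    ...


def say_hello_with(model_name: str) -> str:
    # Pick the model dynamically; the context manager scopes it to this call
    with OpenaiChatModel(model_name):
        return say_hello()


say_hello_with("gpt-3.5-turbo")
say_hello_with("gpt-4")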

v0.7.2

14 Oct 20:58

What's Changed

Full Changelog: v0.7.1...v0.7.2


Allow setting the max_tokens param in OpenaiChatModel. Its default value can also be set using the environment variable MAGENTIC_OPENAI_MAX_TOKENS.

Example

from magentic import prompt
from magentic.chat_model.openai_chat_model import OpenaiChatModel


@prompt("Hello, how are you?", model=OpenaiChatModel(max_tokens=3))
def test() -> str:
    ...


test()
# Output is cut off after 3 tokens:
# 'Hello! I'
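The environment-variable route looks like this. A sketch, assuming the setting is read when the default model is created, so the variable must be set beforehand:

# Sketch: set the default max_tokens via the documented env var.
import os

os.environ["MAGENTIC_OPENAI_MAX_TOKENS"] = "3"

from magentic import prompt


@prompt("Hello, how are you?")
def test() -> str:
    ...


test()  # Response is truncated after 3 tokens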