Use ollama's structured outputs feature #242

Closed as duplicate of #582

@daniel-j-h

I'm looking into pydantic-ai with small and locally running ollama models as backbones.

I'm noticing that even with very simple pydantic models it's possible to run into unexpected ValidationErrors.

Here's what I mean: with a pydantic model as simple as

```python
from pydantic import BaseModel


class Answer(BaseModel):
    value: str = ""
```

I can see pydantic-ai sometimes retrying and still failing validation.

Having experience with llama.cpp's grammars, this was unexpected to me. I was under the assumption that pydantic-ai would transform the pydantic model into a grammar or JSON schema and hard-restrict the LLM's output accordingly. Then validation could never fail, by design, since the LLM's output is constrained to that specific grammar.
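
To make this concrete, here's roughly the setup I'm running — a minimal sketch; the model name is arbitrary and I'm going from memory on the exact Agent parameters:

```python
from pydantic import BaseModel
from pydantic_ai import Agent


class Answer(BaseModel):
    value: str = ""


# Model name is illustrative; any small local ollama model shows the
# behavior. result_type makes pydantic-ai validate the final output
# against the Answer model.
agent = Agent("ollama:llama3.2", result_type=Answer)

result = agent.run_sync("What's the answer?")
print(result.data)  # an Answer instance, unless validation fails and retries are exhausted
```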

Instead, when I debug the request pydantic-ai sends to the locally running ollama with

```sh
nc -l -p 11434
```

I can see pydantic-ai turning the pydantic model into a tool-use invocation.
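
Abridged, the captured request body has roughly this shape (field names from memory, so treat it as illustrative rather than a verbatim capture):

```python
# Rough shape of the chat-completions request pydantic-ai sends to ollama's
# OpenAI-compatible endpoint; abridged and reconstructed from memory.
request_body = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "..."}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "final_result",  # pydantic-ai's result tool
                "description": "The final response",
                "parameters": {  # JSON schema derived from Answer
                    "type": "object",
                    "properties": {"value": {"type": "string", "default": ""}},
                },
            },
        }
    ],
}
```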

With ollama v0.5.0, structured output via JSON schema is now supported:

https://github.com/ollama/ollama/releases/tag/v0.5.0

I was wondering if that would solve the issue of small, locally running models sometimes running into validation errors, since we would hard-restrict the output to the shape of our pydantic model.
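
For comparison, with the ollama Python client that would look something like this — a sketch based on the v0.5.0 announcement, where the new format parameter accepts a JSON schema:

```python
from ollama import chat
from pydantic import BaseModel


class Answer(BaseModel):
    value: str = ""


# Passing the model's JSON schema via `format` asks ollama to constrain
# generation to that schema (new in v0.5.0).
response = chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "What's the answer?"}],
    format=Answer.model_json_schema(),
)

# Validation should now succeed by construction.
answer = Answer.model_validate_json(response.message.content)
```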

Any thoughts on this, or ideas why validation can fail with tool usage as implemented right now? Any pointers on which model providers validation might fail with, and for what reasons? Thanks!
