
VertexAiGeminiChatClient loses information about Safety Ratings when returning ChatResponse #687

@rafal-dudek

Description


Expected Behavior

I would like to be able to extract SafetyRatings from ChatResponse returned by VertexAiGeminiChatClient.call(Prompt prompt).
GenerateContentResponse contains a list of SafetyRatings on each Candidate, so the data is already obtained in the library.

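Roughly the access pattern I have in mind, as a sketch only; the metadata shape is hypothetical (this is the missing piece), and the package names are from the Spring AI version I am using:

```java
import org.springframework.ai.chat.ChatResponse;
import org.springframework.ai.chat.metadata.ChatGenerationMetadata;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.vertexai.gemini.VertexAiGeminiChatClient;

class SafetyRatingsReadSketch {

    // Hypothetical read path: the candidate's safety ratings surfaced through the
    // generic ChatGenerationMetadata of each Generation.
    void printSafetyRatings(VertexAiGeminiChatClient chatClient) {
        ChatResponse response = chatClient.call(new Prompt("Tell me about Gemini safety filters"));

        ChatGenerationMetadata metadata = response.getResult().getMetadata();
        // Hypothetical: the candidate's List<SafetyRating> carried in the content-filter slot
        Object safetyRatings = metadata.getContentFilterMetadata();
        System.out.println(safetyRatings);
    }
}
```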

Current Behavior

VertexAiGeminiChatClient.call(Prompt prompt) returns a ChatResponse which does not carry the SafetyRatings information returned by Gemini.
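For comparison, the same call made directly against the Vertex AI Java SDK keeps the ratings on each candidate; a minimal sketch (package names and constructors depend on the SDK version, so treat them as assumptions):

```java
import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.Candidate;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.api.SafetyRating;
import com.google.cloud.vertexai.generativeai.GenerativeModel;

class DirectSdkSketch {

    // Calling the Vertex AI SDK directly: the safety ratings are on each Candidate.
    void printCandidateSafetyRatings() throws Exception {
        try (VertexAI vertexAi = new VertexAI("my-project-id", "us-central1")) {
            GenerativeModel model = new GenerativeModel("gemini-pro", vertexAi);
            GenerateContentResponse response = model.generateContent("Tell me about Gemini safety filters");

            for (Candidate candidate : response.getCandidatesList()) {
                for (SafetyRating rating : candidate.getSafetyRatingsList()) {
                    System.out.printf("%s -> %s%n", rating.getCategory(), rating.getProbability());
                }
            }
        }
    }
}
```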

Context

I'm not sure how many LLM models return similar information, but it would be a useful feature to store it in a generic format in the Generation objects of the ChatResponse, instead of losing it in this abstraction layer.
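One possible shape, sketched under the assumption that ChatGenerationMetadata.from(finishReason, contentFilterMetadata) can carry an arbitrary payload, similar to how the finish reason is already mapped in other chat clients; this is an illustration of the idea, not a tested change:

```java
import com.google.cloud.vertexai.api.Candidate;
import org.springframework.ai.chat.Generation;
import org.springframework.ai.chat.metadata.ChatGenerationMetadata;

class ResponseMappingSketch {

    // Sketch: keep the candidate's safety ratings when mapping to a Spring AI Generation.
    Generation toGeneration(Candidate candidate, String text) {
        ChatGenerationMetadata metadata = ChatGenerationMetadata.from(
                candidate.getFinishReason().name(),  // finish reason, as other clients already expose it
                candidate.getSafetyRatingsList());   // carry the SafetyRating list as content-filter metadata
        return new Generation(text).withGenerationMetadata(metadata);
    }
}
```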
