Expected Behavior
I would like to be able to extract the SafetyRatings from the ChatResponse returned by VertexAiGeminiChatClient.call(Prompt prompt).
GenerateContentResponse contains a list of SafetyRatings on each Candidate, so the data is already available in the library; see the sketch below.
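For reference, this is roughly where the data lives on the raw Vertex AI SDK response (a minimal sketch using the com.google.cloud.vertexai Java SDK types; not Spring AI code):

```java
import com.google.cloud.vertexai.api.Candidate;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.api.SafetyRating;

public class SafetyRatingsSketch {

    // Sketch: the SDK response already carries the ratings on each candidate.
    static void printSafetyRatings(GenerateContentResponse response) {
        for (Candidate candidate : response.getCandidatesList()) {
            for (SafetyRating rating : candidate.getSafetyRatingsList()) {
                System.out.println(rating.getCategory() + " -> " + rating.getProbability());
            }
        }
    }
}
```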
Current Behavior
VertexAiGeminiChatClient.call(Prompt prompt) returns a ChatResponse that carries no information about the SafetyRatings returned by Gemini.
Context
I'm not sure how many LLM providers return similar information, but it would be useful to store it in a generic format in the Generation objects of the ChatResponse instead of losing it in this abstraction layer; one possible shape is sketched below.
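Purely as an illustration of the "generic format" idea (the type, metadata key, and accessor below are hypothetical, not an existing Spring AI API): each Generation's metadata could expose the ratings under a well-known key such as "safetyRatings", which callers could then read in a provider-agnostic way.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a provider-agnostic way to surface safety ratings
// on each Generation's metadata; none of these names exist in Spring AI today.
public class SafetyRatingsMetadataExample {

    // Hypothetical generic representation of a single rating.
    public record SafetyRatingInfo(String category, String probability) {}

    // Assumes the provider implementation copied the ratings into the
    // generation metadata map under the (hypothetical) key "safetyRatings".
    @SuppressWarnings("unchecked")
    public static List<SafetyRatingInfo> safetyRatingsOf(Map<String, Object> generationMetadata) {
        return (List<SafetyRatingInfo>) generationMetadata
                .getOrDefault("safetyRatings", List.of());
    }
}
```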