langchain-ai / langchain-google


Add prompt_feedback/Safety filter info to VertexAI Gemini Response #218

Closed kardiff18 closed 4 months ago

kardiff18 commented 4 months ago

Right now, a user receives an empty string of text if a Safety filter blocks the response, rather than any information about the safety filters (such as the probability, etc.). It would be great to have the safety filter dictionary returned as part of prompt feedback, similar to the non-VertexAI Gemini implementation: https://github.com/langchain-ai/langchain-google/blob/c34ac8a60567a7226987b3b7cc6257ecf1f233f6/libs/genai/langchain_google_genai/chat_models.py#L526
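
For context, a minimal reproduction of the behavior being described might look like the following (a sketch only; the prompt and `model_name` are placeholders, and exposing the block reason/safety probabilities on the result is the requested behavior, not the current API):

```python
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")
response = llm.invoke("some prompt that trips a safety filter")

# Today a blocked response surfaces as an empty string...
print(repr(response.content))  # ''

# ...with nothing on the message explaining why. The request is for the
# safety filter dictionary (block reason, category probabilities, etc.)
# to come back as prompt feedback, as the genai integration does.
```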

lkuligin commented 4 months ago

it's part of the generation_info: https://github.com/langchain-ai/langchain-google/blob/d4dcff348e02751605d487b5c1979c7369437636/libs/vertexai/langchain_google_vertexai/_utils.py#L151

or am I missing anything?
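
For reference, the `generation_info` mentioned here can be inspected from the `LLMResult` returned by `generate()`. A minimal sketch (the model name and prompt are placeholders, and the exact keys, e.g. `safety_ratings`, depend on the installed version of `langchain-google-vertexai`):

```python
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")
result = llm.generate([[HumanMessage(content="some prompt")]])

# generation_info is populated per candidate; the safety metadata
# built in _utils.get_generation_info is expected to appear here.
info = result.generations[0][0].generation_info
print(info)
```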

kardiff18 commented 4 months ago

That's fair... I guess I was hoping for the prompt_feedback directly. Based on the chain I'm using, I can't get it from generation_info myself without writing a custom callback (sketched below), which I can do; it just would be nice if everything was returned natively.
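
A custom callback along these lines is the workaround being referred to. A rough sketch (the handler name and the commented-out chain usage are illustrative, not part of the library):

```python
from typing import Any, Dict, List, Optional

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class GenerationInfoCollector(BaseCallbackHandler):
    """Collects generation_info (including any safety metadata) from each LLM call."""

    def __init__(self) -> None:
        self.infos: List[Optional[Dict[str, Any]]] = []

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        # Each generation carries its own generation_info dict.
        for generations in response.generations:
            for generation in generations:
                self.infos.append(generation.generation_info)


# Hypothetical usage inside a chain:
# collector = GenerationInfoCollector()
# chain.invoke({"input": "..."}, config={"callbacks": [collector]})
# print(collector.infos)
```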

You can close this for now.