Open fikrisandi opened 2 months ago
https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/configure-safety-attributes
Non-configurable safety filters, which block child sexual abuse material (CSAM) and personally identifiable information (PII).
Expected Behavior
The Vertex AI Generative AI SDK should return a response object even when the candidate is blocked by safety_settings as prohibited content (finish_reason: "PROHIBITED_CONTENT"). The response should still expose the safety metadata (safety_ratings) so the caller can understand why the content was blocked. In short, I want to handle the block by reading attributes such as finish_reason and safety_ratings instead of catching an exception.
Actual Behavior
Currently the SDK raises a ValueError related to the safety block instead of returning the block information (e.g. finish_reason, safety_ratings), which would be useful for understanding the reason for the block.
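For reference, the workaround I currently have to hand-roll looks roughly like this. It is a minimal sketch: the `FakeCandidate`/`FakeResponse` classes are stand-ins I made up so the snippet runs without GCP credentials, but the `try/except ValueError` around `response.text` matches the behavior I see from the SDK when a candidate is blocked.

```python
def describe_response(response):
    """Return generated text, or block metadata if the candidate was blocked."""
    candidate = response.candidates[0]
    try:
        # The SDK raises ValueError here when the candidate has no text part
        # (e.g. finish_reason PROHIBITED_CONTENT).
        return {"blocked": False, "text": response.text}
    except ValueError:
        # Blocked: fall back to the metadata the candidate still exposes.
        return {
            "blocked": True,
            "finish_reason": str(candidate.finish_reason),
            "safety_ratings": [str(r) for r in candidate.safety_ratings],
        }

# --- Minimal stand-ins (assumptions, not the real SDK classes) ---
class FakeCandidate:
    finish_reason = "PROHIBITED_CONTENT"
    safety_ratings = ["HARM_CATEGORY_DANGEROUS_CONTENT: HIGH"]

class FakeResponse:
    candidates = [FakeCandidate()]

    @property
    def text(self):
        raise ValueError("Cannot get the response text: blocked by safety filters.")

info = describe_response(FakeResponse())
print(info["blocked"], info["finish_reason"])
```

Having to catch a broad ValueError just to reach metadata that already exists on the candidate is what I would like the SDK to avoid.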
My Code
Output Code
If I give a harmful prompt (for example, one mentioning a pedophile), the program produces output like this:
Thank You