langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

Feature: No safety settings when using langchain_google_genai's ChatGoogleGenerativeAI #15095

Open 7vansh7 opened 6 months ago

7vansh7 commented 6 months ago

Feature request

The safety settings exist in the google_generativeai library but are not exposed in the langchain_google_genai library. The safety settings are basically an array of dictionaries passed along when sending the prompt.

Motivation

The problem with not having this is that when we use the ChatGoogleGenerativeAI model, if a prompt violates the default safety settings, the model won't return an answer.

If we could change the safety settings and send them with the prompt to the model, we could fix this issue.
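
For context, here is a rough sketch of how the underlying google_generativeai library already accepts these settings when generating content (the model name, prompt, and thresholds below are only illustrative):

import google.generativeai as genai

genai.configure(api_key="...")
model = genai.GenerativeModel("gemini-pro")

# Each entry pairs a harm category with the blocking threshold to apply.
response = model.generate_content(
    "Summarize the latest election coverage.",
    safety_settings=[
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
    ],
)
print(response.text)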

Your contribution

I am currently reading the code of the library and will raise a PR if I can fix the issue.

dosubot[bot] commented 6 months ago

🤖

That's great to hear that you're looking into the code and considering raising a PR to address this issue. Your contribution will definitely be valuable to the LangChain community. If you have any questions or need any assistance while working on the PR, feel free to ask. Keep up the good work!



lrbmike commented 6 months ago

I have a similar problem. I want to use the genai.configure function to set a few things, but I don't know how to set these through the langchain_google_genai library. Looking at the source code, it only handles the API key: genai.configure(api_key=google_api_key).
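
For reference, a rough sketch of the other options genai.configure accepts in the underlying library, assuming the current google-generativeai API (the values below are placeholders); the langchain_google_genai wrapper currently forwards only the API key:

import google.generativeai as genai

genai.configure(
    api_key="...",     # the only setting the wrapper passes through today
    transport="rest",  # e.g. use REST instead of the default gRPC transport
    client_options={"api_endpoint": "generativelanguage.googleapis.com"},
)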

7vansh7 commented 6 months ago

Yes, it is quite frustrating because it triggers the safety warnings with most politics-related text or PDFs. I am currently working on fixing it and will raise the PR once it's done.

7vansh7 commented 6 months ago

Just added the PR; will close the issue once it's merged.

Spritan commented 5 months ago

Any updates?

rayanfer32 commented 5 months ago

Any updates ?

vinnyricciardi commented 5 months ago

@Spritan and @rayanfer32 From the PR above, it looks like you can just add safety_settings=None when you initialize your langchain model. For example:

langchain_model = ChatGoogleGenerativeAI(
    model="gemini-pro",
    google_api_key=GOOGLE_API_KEY,
    safety_settings=None,
)

Or be more specific:

# HarmCategory and HarmBlockThreshold come from the google-generativeai package.
from google.generativeai.types import HarmBlockThreshold, HarmCategory

langchain_model = ChatGoogleGenerativeAI(
    model="gemini-pro",
    google_api_key=GOOGLE_API_KEY,
    temperature=0.2,
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    },
)

baron commented 4 months ago

Google GenerativeAI is still missing the safety_settings that were added to VertexAI. Without any default values set, Google GenerativeAI is prone to failing silently.

https://github.com/langchain-ai/langchain/pull/15344

https://github.com/langchain-ai/langchain/blob/master/libs/partners/google-genai/langchain_google_genai/llms.py

ironerumi commented 4 months ago

@baron I think the team has fixed it and it's working for me with version 0.0.9

#16836

baron commented 4 months ago

@ironerumi Thanks for letting me know! Yes, it seems fixed now. I can finally work with the official wrapper again 👍

blackslashcreative commented 4 months ago

It seems like this is still not working? I tried setting BLOCK_NONE for everything and it still won't return a proper response. It still returns "I am not able to answer that question... Would you like me to try something different?" I'm using the code below, which I think is close:

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    temperature=0.7,
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    },
)

baron commented 4 months ago

> It seems like this is still not working? I tried setting BLOCK_NONE for everything and it still won't return a proper response. [...]

That's basically what I have too, and unfortunately Google will still flat-out refuse some queries. I highly recommend trying LangSmith to trace these calls so you can precisely narrow down the server response, and adding a fallback mechanism within your chain so it doesn't stop the execution (see the sketch below). If you increase debugging it should show more helpful errors (but sometimes it will just fail). You could also try using GoogleGenerativeAI instead.
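
A minimal sketch of the fallback idea with LCEL, assuming langchain-core's with_fallbacks and using GoogleGenerativeAI as the backup (the prompt and model names are placeholders):

from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAI

prompt = ChatPromptTemplate.from_template("Summarize this text: {text}")

primary = prompt | ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.7)
backup = prompt | GoogleGenerativeAI(model="gemini-pro", temperature=0.7)

# If the chat model errors out (e.g. the response stops with finish_reason SAFETY),
# the runnable falls back to the second chain instead of halting the whole execution.
chain = primary.with_fallbacks([backup])
result = chain.invoke({"text": "..."})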

EthanNadler commented 3 months ago

I am curious about this as well. I have tried using both llm = ChatGoogleGenerativeAI(safety_settings=None, model="gemini-pro", temperature=0.7, top_p=0.85) and the enumerated values, and both times it fails. If anyone has a solution to this, please let us know.

ironerumi commented 3 months ago

It's just my assumption, but lowering the threshold should make Google respond with more sensitive content; even when set to None, it won't respond with truly harmful content, like describing how to make a bomb in the kitchen or something.

So for those truly harmful questions, the difference between setting the threshold or not is whether you get an answer like "I cannot tell you that" or an empty response.

MaharshiYeluri02 commented 3 months ago

'BLOCK_NONE' is restricted; try the following settings:

import google.generativeai as genai

safety_settings = {
    genai.types.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    genai.types.HarmCategory.HARM_CATEGORY_HARASSMENT: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    genai.types.HarmCategory.HARM_CATEGORY_HATE_SPEECH: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
    genai.types.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: genai.types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
}

marwan-elsafty commented 1 month ago

Neither BLOCK_NONE nor BLOCK_ONLY_HIGH is working.

ChungNYCU commented 3 weeks ago

Yeah, even if I set BLOCK_ONLY_HIGH, it will still block MEDIUM content:

gemini_safety_setting: dict = {}
for category in HarmCategory:
    gemini_safety_setting[category] = HarmBlockThreshold.BLOCK_ONLY_HIGH

# Initialize the ChatGoogleGenerativeAI instance with the API key
self.ai = ChatGoogleGenerativeAI(
    google_api_key=google_api_key,
    model=os.environ.get('GEMINI_MODEL'),
    safety_settings=gemini_safety_setting,
)

return msg

google.generativeai.types.generation_types.StopCandidateException: index: 0
finish_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: MEDIUM
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}

ez945y commented 2 weeks ago

We really want this setting to work; it is very urgent. Thanks.

jfperusse-bhvr commented 1 week ago

I also noticed the parameter on ChatGoogleGenerativeAI is ignored. However, you can pass safety_settings to your model's invoke method (or use model.bind(...) when chaining with LCEL) and it works fine.
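
A minimal sketch of that workaround, assuming safety_settings is accepted as a call-time keyword argument as described above (model name and prompt are placeholders):

from google.generativeai.types import HarmBlockThreshold, HarmCategory
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}

llm = ChatGoogleGenerativeAI(model="gemini-pro")

# Pass the settings at call time...
response = llm.invoke("Summarize the debate transcript.", safety_settings=safety_settings)

# ...or bind them once when composing an LCEL chain.
prompt = ChatPromptTemplate.from_template("Summarize: {text}")
chain = prompt | llm.bind(safety_settings=safety_settings)
print(chain.invoke({"text": "..."}).content)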