Spstolar opened 1 year ago
It appears that the fix wasn't so simple. Something about the format of my input keeps triggering the filter. A workaround is to check whether the response contains filters but no result, and in that case call `google.generativeai.generate_text` instead. This works for my problem, but it doesn't correct the issue in general.
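The fallback can be sketched as follows. This is a minimal, self-contained illustration with the actual API calls stubbed out; the response attributes (`last`, `filters`) are assumptions based on the PaLM-era `google.generativeai` response objects, not a verified implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChatResponse:
    """Stand-in for the object returned by google.generativeai.chat (assumed shape)."""
    last: Optional[str] = None          # the model's reply, None when blocked
    filters: list = field(default_factory=list)  # non-empty when a filter fired

def chat_with_fallback(prompt: str, chat_fn, generate_text_fn) -> Optional[str]:
    """Try the chat endpoint first; if the reply was filtered away, retry via generate_text."""
    response = chat_fn(messages=prompt)
    if response.last is None and response.filters:
        # Blocked (e.g. BlockedReason.OTHER) with no reply: fall back to the
        # plain text-completion endpoint, which accepts the same prompt.
        return generate_text_fn(prompt=prompt)
    return response.last

# Stubs illustrating both paths (real code would call the genai client here):
blocked_chat = lambda messages: ChatResponse(last=None, filters=[{"reason": "OTHER"}])
ok_chat = lambda messages: ChatResponse(last="hi there")
text_fallback = lambda prompt: f"fallback:{prompt}"
```

With these stubs, a blocked chat routes through `text_fallback`, while a normal chat returns its reply untouched.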
I noticed many cases where no response would be generated. In particular, when I pass a simple Python function along with a request for type hints, nothing is returned if I supply more than the first two lines of the function.
At first, I thought this was a restriction of PaLM. When I reconstructed the call, I was getting `BlockedReason.OTHER` in the `filters` section of the response. But I was able to make the same request through MakerSuite and via a direct call to `google.generativeai`, so it must be something particular about how the request in `Palm.chat` is formulated.

Diving into the possible issue, I can see that when you pass only a prompt, `build_prompt_messages` is used to assign a value to `"messages"` in `kwargs`. Then `kwargs` is passed to `google.generativeai.chat`. I suspect the issue is something about how `messages` is formatted for chat, but I am not sure how to correct that.

As an alternative, it seems possible to call
`google.generativeai.chat` but change `"messages"` to `"prompt"` when no conversation is passed. This works when I reconstruct the process directly. I'll see if I can make the modification and will submit a PR; that will likely resolve this issue as well.
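The proposed change could look roughly like this. The names here (`build_prompt_messages`, `history`) are hypothetical stand-ins for the actual wrapper code, and the single-line kwargs key swap is the only substantive change:

```python
def build_prompt_messages(prompt, history):
    """Assumed current behavior: fold the bare prompt into a message list."""
    return list(history) + [prompt]

def build_chat_kwargs(prompt, history=()):
    """Build the kwargs handed to the genai call.

    Proposed fix: when there is no prior conversation, send the raw prompt
    under "prompt" instead of wrapping it as a one-element "messages" list.
    """
    if not history:
        return {"prompt": prompt}
    return {"messages": build_prompt_messages(prompt, history)}
```

So a bare prompt yields `{"prompt": ...}` while an ongoing conversation keeps the existing `{"messages": [...]}` shape.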