simonw / llm-palm

Plugin for LLM adding support for Google's PaLM 2 model
Apache License 2.0

`Palm.execute` does not handle some inputs gracefully #6

Open Spstolar opened 1 year ago

Spstolar commented 1 year ago

I noticed many cases where no response was generated. In particular, when I pass a simple Python function with a request for type hints, it returns nothing if I supply more than the first two lines of the function.

At first, I thought this was a restriction of PaLM itself. When I reconstructed the call, I was getting a BlockedReason.OTHER in the filters section of the response. But I was able to make the same request through MakerSuite and via a direct call to google.generativeai. Thus, it's something particular about how the request in Palm.chat is formulated.

Digging into the possible cause, I can see that when you pass only a prompt, build_prompt_messages is used to assign a value to "messages" in kwargs; kwargs is then passed to google.generativeai.chat. I suspect the issue is in how messages is formatted for chat, but I am not sure how to correct it.
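If I've read the plugin correctly, the flow looks roughly like this. This is a simplified reconstruction, not the actual llm-palm code; the conversation shape here is my own stand-in:

```python
def build_prompt_messages(prompt, conversation=None):
    """Simplified stand-in for the plugin's helper (my reconstruction).

    With no prior conversation the bare prompt becomes a single-element
    message list; prior turns are flattened in as alternating
    prompt/reply strings.
    """
    messages = []
    if conversation is not None:
        for prior_prompt, prior_reply in conversation:
            messages.append(prior_prompt)
            messages.append(prior_reply)
    messages.append(prompt)
    return messages

# The plugin then forwards this straight to the chat endpoint, roughly:
#   kwargs["messages"] = build_prompt_messages(prompt, conversation)
#   response = google.generativeai.chat(**kwargs)
```

So even a one-off prompt always arrives at chat wrapped as a message list, which is where I suspect the filter gets tripped.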

As an alternative, it seems possible to still call google.generativeai.chat but change "messages" to "prompt" when no conversation is passed. This works when I reconstruct the process directly. I'll see if I can make the modification and will submit a PR. This will likely resolve this issue as well.
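The dispatch I have in mind would look roughly like this. A sketch only: whether chat actually accepts a bare "prompt" keyword this way is exactly what I'd need to verify before a PR, so treat the key names as illustrative:

```python
def build_chat_kwargs(prompt, conversation=None):
    """Choose the keyword argument based on whether we have prior turns.

    Sketch under my assumptions: a lone prompt goes through as "prompt",
    while a conversation is flattened into a "messages" list. Whether
    google.generativeai.chat accepts "prompt" is unverified.
    """
    if conversation is None:
        return {"prompt": prompt}
    messages = []
    for prior_prompt, prior_reply in conversation:
        messages.extend([prior_prompt, prior_reply])
    messages.append(prompt)
    return {"messages": messages}

# Then, roughly:
#   response = google.generativeai.chat(**build_chat_kwargs(prompt, conversation))
```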

Spstolar commented 1 year ago

It appears the fix wasn't so simple: something about the format of my input keeps triggering the filter. A workaround is to check whether there are filters but no response, and if so call google.generativeai.generate_text instead. This works for my case, but doesn't fix the issue in general.
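Sketched, the workaround looks like this. The endpoints are passed in as callables so the logic is testable without the real API; the attribute names (`.filters` and `.last` on the chat response, `.result` on the text completion) follow what I saw on the response objects, but verify them against the actual google.generativeai classes:

```python
def chat_with_fallback(prompt, chat_fn, generate_text_fn):
    """Try the chat endpoint first; if the response was filtered
    (filters set but no reply), fall back to plain text completion.

    chat_fn / generate_text_fn stand in for google.generativeai.chat
    and google.generativeai.generate_text.
    """
    response = chat_fn(messages=[prompt])
    # A filtered chat response carries filter entries but no last reply.
    if getattr(response, "filters", None) and not getattr(response, "last", None):
        return generate_text_fn(prompt=prompt).result
    return response.last
```

This rescues prompts like mine that only trip the filter on the chat path, but it silently changes which model endpoint answers, so it papers over rather than fixes the underlying formatting issue.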