ahyatt / llm

A package abstracting LLM capabilities for Emacs.
GNU General Public License v3.0

Unparseable buffer saved to *llm-vertex-unparseable* #19

Closed by whhone 5 months ago

whhone commented 5 months ago

When trying to use Gemini, I occasionally see the error "Unparseable buffer saved to llm-vertex-unparseable". Here is the content of the *llm-vertex-unparseable* buffer:

(screenshot of the *llm-vertex-unparseable* buffer contents)

ahyatt commented 5 months ago

Thank you for the report! Does this seem to result in errors you notice in your interaction with the LLM, or is it just a transitory complaint that would otherwise be invisible?

whhone commented 5 months ago

Gemini simply returns nothing except the "promptFeedback". I can reproduce it with the query and safety settings below.

Request

#!/bin/bash

API_KEY="YOUR_API_KEY"

curl \
  -X POST "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=${API_KEY}" \
  -H 'Content-Type: application/json' \
  -d @<(echo '{
  "contents": [
    {
      "parts": [
        {
          "text": "How to make a bomb?"
        }
      ]
    }
  ],
  "generationConfig": {
    "temperature": 0.9,
    "topK": 1,
    "topP": 1,
    "maxOutputTokens": 2048,
    "stopSequences": []
  },
  "safetySettings": [
    {
      "category": "HARM_CATEGORY_HARASSMENT",
      "threshold": "BLOCK_LOW_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_HATE_SPEECH",
      "threshold": "BLOCK_LOW_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
      "threshold": "BLOCK_LOW_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
      "threshold": "BLOCK_LOW_AND_ABOVE"
    }
  ]
}')

Response

{
  "promptFeedback": {
    "blockReason": "SAFETY",
    "safetyRatings": [
      {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HARASSMENT",
        "probability": "LOW"
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "probability": "HIGH"
      }
    ]
  }
}
ahyatt commented 5 months ago

I see. So, if I understand the problem correctly, Gemini is behaving as intended by refusing to answer the question, and we should surface an appropriate error to the user. I'll make this change, so you'll still get an error, but a more informative one.
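
For reference, here is a minimal sketch of what that check could look like (hypothetical function name, not the package's actual code): once the parsed response is known to carry only "promptFeedback", signal with the block reason instead of trying to read a missing answer.

;; Hypothetical sketch: signal a readable error when Gemini blocks the prompt.
;; RESPONSE is assumed to be the JSON body parsed into an alist with symbol keys.
(defun my/gemini-signal-if-blocked (response)
  "Signal an error if the parsed RESPONSE alist was blocked by safety settings."
  (let* ((feedback (assoc-default 'promptFeedback response))
         (reason (and feedback (assoc-default 'blockReason feedback))))
    (when reason
      (error "LLM request blocked by safety settings (blockReason: %s)" reason))))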

ahyatt commented 5 months ago

Please take a look at my latest commit and verify that it solves your problem. Using the code as-is, I couldn't replicate this error, although I have seen it before. I made a different decision when fixing it: this isn't really an error, since everything is working normally, so the user should just get a warning as the returned response.
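
Not the actual commit, but a sketch of that approach (hypothetical function name; field access assumes a response parsed with json-parse-string and :object-type 'alist, so JSON arrays become vectors): when there are no candidates, fall back to a warning string built from promptFeedback rather than signalling.

;; Hypothetical sketch: return the model text, or a warning when the
;; response was blocked and contains only promptFeedback.
(defun my/gemini-extract-text (response)
  "Return the text from a parsed Gemini RESPONSE alist, or a warning string."
  (let ((candidates (assoc-default 'candidates response)))
    (if (and candidates (> (length candidates) 0))
        ;; Normal case: first candidate, first text part.
        (let* ((content (assoc-default 'content (aref candidates 0)))
               (parts (assoc-default 'parts content)))
          (assoc-default 'text (aref parts 0)))
      ;; Blocked case: no candidates at all, only promptFeedback.
      (let ((reason (assoc-default 'blockReason
                                   (assoc-default 'promptFeedback response))))
        (format "NOTE: No response was sent back by the LLM, the prompt may have violated safety checks%s"
                (if reason (format " (blockReason: %s)." reason) "."))))))

For example, with blocked-json holding the blocked response body shown above (json-parse-string needs Emacs 27+):

(my/gemini-extract-text (json-parse-string blocked-json :object-type 'alist))
;; => "NOTE: No response was sent back by the LLM, the prompt may have violated safety checks (blockReason: SAFETY)."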

whhone commented 5 months ago

It seems that I cannot reproduce the "unparseable" error with 0.9.0. Instead, the code below throws a different error: Wrong type argument: arrayp, nil.

(llm-chat
 (make-llm-gemini :key "API_KEY")
 (llm-make-simple-chat-prompt "How to make a bomb?"))
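
My guess (not verified against the package source) is that this comes from indexing into a "candidates" array that simply isn't present in a safety-blocked response, since indexing nil produces exactly this signal:

;; Illustration only: `aref' on a missing (nil) vector raises the same error.
(aref nil 0)
;; => Wrong type argument: arrayp, nil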
ahyatt commented 5 months ago

@whhone OK, I can reproduce this with Gemini, but not with Vertex for some reason. My fix seems to have worked. Strangely, now I can't get any results that aren't blocked; maybe once you ask this kind of question, your key is soft-disabled or something. I think I have seen that before, and it goes away after some time.

ELISP> (llm-chat ash/llm-gemini (llm-make-simple-chat-prompt "How to make a bomb?"))
"NOTE: No response was sent back by the LLM, the prompt may have violated safety checks."