google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0
1.39k stars 270 forks

response is not readable #166

Closed Andy963 closed 6 months ago

Andy963 commented 8 months ago

Description of the bug:

Sometimes the gemini-pro response is not readable. For example:

ofscrollingscrolling,makingitcompatiblewithvariousdevicesandinputmethods.\nExampleUsage:\n\n//Assuming"dom"isaDOMelementand"card"isanobjectwitha"value"propertyrepresentingthecontainer.\ncheckAndScrollToDiv(dom,card.value);\n\nInthiscontext,whenyoucallcheckAndScrollToDiv(dom,card.value),thefunctionwillcheckifthespecifiedDOMelement(dom)ispositionedoutofthevisibleareawithinthecontainer(card.value)and,ifso,scrollsthecontainertobringtheelementopintoviewatthetop.\nConclusion:\nThecheckAndScrollToDivfunctionplaysacrucialroleinimprovinguserexperienceandaccessibilitybyensuringthatelementsarevisiblewithinthecontainer.Itscompatibilitywithvariousdevicesandinputmethodsmakesitavaluabletoolforcreatinguser-friendlyapplications.

This is in stream mode, using print(repr(resp.text)). There were no spaces between words, so the output was not readable, even after I added a note at the end of my question:

question += 'please return a formatted and human readable answer.'

Sometimes, when I asked gemini-pro to return a readable string again, it worked; sometimes it did not.

Actual vs expected behavior:

actual: ihaveaquesiton
expected: i have a quesiton

Any other information you'd like to share?

python 3.9
google-ai-generativelanguage 0.4.0
google-api-core 2.15.0
google-auth 2.25.2
google-generativeai 0.3.2
googleapis-common-protos 1.62.0

meystari commented 7 months ago

Is there any update on this from the Google team, or any workaround?

I have tried a lot of options, but didn't get through it:

  1. Changed temperature, top k, top p.
  2. Tried different Python versions (unfortunately Firebase Functions only supports Python 3.10 and 3.11).
  3. Asked the AI to generate with spaces / human-readable output every time.
  4. Tried the streaming version of the API.
  5. Unfortunately the only text model this API seems to support is gemini-pro; when I tried the bison models it said they are not supported.
lintuxlin commented 7 months ago

@MarkDaoust Can you please help with this ?

For me it happens only when the chat history is populated with a number of records.

Andy963 commented 7 months ago

From my experience:

  1. Tell the AI that the answer format is not readable (no spaces between words) and ask it to send a human-readable answer again. Most of the time this works, but not 100%.
  2. Add a note (asking the AI to send the answer in a human-readable format) at the end of each question. This works, but it's a silly workaround.

And finally my solution: when this happens, try OpenAI or Claude instead.
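The first workaround above could be automated with a simple retry heuristic. This is only a sketch; the function name and threshold are my own invention, not part of the library:

```python
def looks_unreadable(text: str, min_space_ratio: float = 0.05) -> bool:
    """Heuristic: flag responses whose space density is suspiciously low,
    which is the symptom described in this issue."""
    if len(text) < 20:
        return False  # too short to judge reliably
    return text.count(' ') / len(text) < min_space_ratio

# A caller could re-send the prompt whenever this returns True.
looks_unreadable('ihaveaquestionaboutthisfunctionanditsbehavior')  # True
looks_unreadable('i have a question about this function')          # False
```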

meystari commented 7 months ago

It doesn't work no matter what I try. Here is a simple test that reproduces the issue: it always generates text with no spaces. This does not happen if the chat history is empty, and it does not happen in the Google Cloud interface for gemini-pro chat.

model = genai.GenerativeModel('gemini-pro')
chistory=[{'role': 'user', 'parts': '\nYou are a chatbot You will use a conversational and supportive style of interaction.'}, 
          {'role': 'model', 'parts': 'Ok, I agree. What can I do for you ?'}, 
          {'role': 'user', 'parts': 'Hi'}, 
          {'role': 'model', 'parts': 'Hi there,how are you doing today?'},
          {'role': 'user', 'parts': '. How are you'}, 
          {'role': 'model', 'parts': 'I am well,thank you.?'}, 
          {'role': 'user', 'parts': 'Please use space in text'}, 
          {'role': 'model', 'parts': 'Ok , how are you feeling today ?'},
          {'role': 'user', 'parts': 'great'}, 
          {'role': 'model', 'parts': "That's great to.Can you tell me more about you're feeling?"}, 
          {'role': 'user', 'parts': '. amazing'}, {'role': 'model', 'parts': "Amazing! That's owonderful to hear?"}]
chat = model.start_chat(history=chistory)
response = chat.send_message("?")
response.text
lintuxlin commented 7 months ago

Yes, actually it starts happening if you have more than one set of user/model turns in the chat history, like this:

model = genai.GenerativeModel('gemini-pro')
chistory=[{'role': 'user', 'parts': '\nYou are a chatbot You will use a conversational and supportive style of interaction.'}, 
          {'role': 'model', 'parts': 'Ok, I agree. What can I do for you ?'}, 
          {'role': 'user', 'parts': 'Hi'}, 
          {'role': 'model', 'parts': 'Hi there,how are you doing today?'},

  ]
chat = model.start_chat(history=chistory)
response = chat.send_message("How are you ?")
response.text
piresramon commented 7 months ago

Try with:

chistory=[{'role': 'user', 'parts': ['\nYou are a chatbot You will use a conversational and supportive style of interaction.']}, 
          {'role': 'model', 'parts': ['Ok, I agree. What can I do for you ?']}, 
          {'role': 'user', 'parts': ['Hi']}, 
          {'role': 'model', 'parts': ['Hi there,how are you doing today?']},

  ]
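In other words, the workaround is to wrap each bare-string 'parts' value in a list. A hypothetical helper (not part of the library) that normalizes an existing history this way:

```python
def normalize_history(history):
    """Wrap bare-string 'parts' values in a one-element list so that
    affected library versions don't iterate them character by character."""
    fixed = []
    for turn in history:
        parts = turn['parts']
        if isinstance(parts, str):
            parts = [parts]  # 'Hi' -> ['Hi']
        fixed.append({'role': turn['role'], 'parts': parts})
    return fixed

normalize_history([{'role': 'user', 'parts': 'Hi'}])
# [{'role': 'user', 'parts': ['Hi']}]
```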
MarkDaoust commented 7 months ago

@piresramon - Thanks. Yeah, that's it. It looks like the current implementation didn't check for str before checking for iterable when interpreting parts. Interpreting a string as a list of characters is never what you want; I'll fix it.
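The bug can be illustrated in isolation: a Python str is itself iterable, so a generic "treat iterables as lists" branch splits it into characters unless str is special-cased first. A minimal sketch of the fixed check order (my own illustration, not the actual library code):

```python
def interpret_parts(parts):
    # Fixed order: check str before the generic iterable branch.
    if isinstance(parts, str):
        return [parts]
    try:
        return list(parts)  # real lists/tuples of parts pass through
    except TypeError:
        return [parts]

# Without the str check, list('Hi there') would yield
# ['H', 'i', ' ', 't', ...] -- one "part" per character.
interpret_parts('Hi there')       # ['Hi there']
interpret_parts(['Hi', 'there'])  # ['Hi', 'there']
```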

meystari commented 6 months ago

Awesome, thank you @MarkDaoust @piresramon

meystari commented 6 months ago

Unfortunately that's not helping. I put the strings inside arrays and it still returns text without spaces when there is more than one pair of chat turns. For the time being I switched to the chat-bison model on Vertex AI, and that is helping me get past this problem.

piresramon commented 6 months ago

Hi @meystari. I tried with a 6-turn conversation, and it's working for me. Below is the code I am using:

model = genai.GenerativeModel('gemini-pro')
chistory = [
    {'role': 'user', 'parts': ['Hi']},
    {'role': 'model', 'parts': ['Hi there, how are you doing today?']},
    {'role': 'user', 'parts': ["I'm doing well, thank you. I wanted to ask you about artificial intelligence."]},
    {'role': 'model', 'parts': ["Sure, I'd be happy to discuss AI with you. What specifically would you like to know?"]},
    {'role': 'user', 'parts': ["I'm curious about the current applications of AI in various industries."]},
    {'role': 'model', 'parts': ["AI is being used across many sectors such as healthcare for diagnostics, finance for fraud detection, and even in agriculture for crop monitoring and optimization. It's quite versatile."]},
    {'role': 'user', 'parts': ["That sounds fascinating! What about the ethical considerations surrounding AI?"]},
    {'role': 'model', 'parts': ["Ethical considerations in AI are critical. They include issues like bias in algorithms, job displacement, and the potential for misuse of AI-powered technologies."]},
    {'role': 'user', 'parts': ["How do we address these ethical concerns effectively?"]},
    {'role': 'model', 'parts': ["Addressing ethical concerns requires collaboration between technologists, policymakers, ethicists, and society at large. Transparency, accountability, and inclusive design are crucial in developing ethical AI systems."]}
]
chat = model.start_chat(history=chistory)
response = chat.send_message("What was the first answer you gave me at the beginning of the conversation?")
response.text
# 'My first response was "Hi there, how are you doing today?"'

Just for reference, I am using version 0.3.2.