Closed: Andy963 closed this issue 6 months ago
Is there any update on this from the Google team, or any workaround?
I have tried a lot of options, but didn't get through it.
@MarkDaoust Can you please help with this?
For me it happens only if the chat history is populated with a number of records.
From my experience:
and finally my solution: when "this" happens, try OpenAI or Claude instead.
It doesn't work no matter what I try. Here is a simple test that reproduces the issue: it always generates text with no spaces. This does not happen if the chat history is absent, and it does not happen when trying the gemini-pro chat from the Google Cloud interface.
model = genai.GenerativeModel('gemini-pro')
chistory=[{'role': 'user', 'parts': '\nYou are a chatbot You will use a conversational and supportive style of interaction.'},
{'role': 'model', 'parts': 'Ok, I agree. What can I do for you ?'},
{'role': 'user', 'parts': 'Hi'},
{'role': 'model', 'parts': 'Hi there,how are you doing today?'},
{'role': 'user', 'parts': '. How are you'},
{'role': 'model', 'parts': 'I am well,thank you.?'},
{'role': 'user', 'parts': 'Please use space in text'},
{'role': 'model', 'parts': 'Ok , how are you feeling today ?'},
{'role': 'user', 'parts': 'great'},
{'role': 'model', 'parts': "That's great to.Can you tell me more about you're feeling?"},
{'role': 'user', 'parts': '. amazing'}, {'role': 'model', 'parts': "Amazing! That's owonderful to hear?"}]
chat = model.start_chat(history=chistory)
response = chat.send_message("?")
response.text
Yes, actually it starts happening as soon as there is more than one pair of user/model turns in the chat history, like this:
model = genai.GenerativeModel('gemini-pro')
chistory=[{'role': 'user', 'parts': '\nYou are a chatbot You will use a conversational and supportive style of interaction.'},
{'role': 'model', 'parts': 'Ok, I agree. What can I do for you ?'},
{'role': 'user', 'parts': 'Hi'},
{'role': 'model', 'parts': 'Hi there,how are you doing today?'},
]
chat = model.start_chat(history=chistory)
response = chat.send_message("How are you ?")
response.text
Try with:
chistory=[{'role': 'user', 'parts': ['\nYou are a chatbot You will use a conversational and supportive style of interaction.']},
{'role': 'model', 'parts': ['Ok, I agree. What can I do for you ?']},
{'role': 'user', 'parts': ['Hi']},
{'role': 'model', 'parts': ['Hi there,how are you doing today?']},
]
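The suggestion above wraps each parts value in a list. A small helper can apply the same normalization to an existing history; note that normalize_history is my own hypothetical name, not part of the google-generativeai SDK:

```python
def normalize_history(history):
    # Hypothetical helper (not part of the SDK): ensure every 'parts'
    # value is a list, so a bare string is not iterated per character.
    fixed = []
    for turn in history:
        parts = turn['parts']
        if isinstance(parts, str):
            parts = [parts]
        fixed.append({'role': turn['role'], 'parts': list(parts)})
    return fixed
```

Usage would then be chistory = normalize_history(chistory) before calling model.start_chat(history=chistory).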
@piresramon - Thanks. Yeah, that's it. It looks like the current implementation didn't check for str before checking for iterable when interpreting parts. Interpreting a string as a list of characters is never what you want; I'll fix it.
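For illustration, the pitfall is that a Python str is itself iterable, so a generic "is it iterable?" check splits it into characters. Below is a minimal sketch of the str-before-iterable check described above; to_parts is a hypothetical name for illustration, not the SDK's internal function:

```python
def to_parts(value):
    # Check str *before* iterable: otherwise iterating 'Hi' yields
    # ['H', 'i'] and each character becomes its own part.
    if isinstance(value, str):
        return [value]
    try:
        return list(value)
    except TypeError:
        return [value]

print(list('Hi'))            # ['H', 'i'] - a string iterates per character
print(to_parts('Hi'))        # ['Hi']     - treated as a single part
print(to_parts(['a', 'b']))  # ['a', 'b'] - real iterables pass through
```

This is why wrapping each history string in a list, as suggested above, sidesteps the bug even on SDK versions without the fix.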
Awesome, thank you @MarkDaoust @piresramon
Unfortunately that's not helping. I added the string inside an array and it still returns text without spaces when there is more than one pair of chat turns. For the time being I switched to the chat-bison model on Vertex AI, which is helping me get past this problem.
Hi @meystari. I tried with a 6-turn conversation, and it's working for me. Below is the code I am using:
model = genai.GenerativeModel('gemini-pro')
chistory = [
{'role': 'user', 'parts': ['Hi']},
{'role': 'model', 'parts': ['Hi there, how are you doing today?']},
{'role': 'user', 'parts': ["I'm doing well, thank you. I wanted to ask you about artificial intelligence."]},
{'role': 'model', 'parts': ["Sure, I'd be happy to discuss AI with you. What specifically would you like to know?"]},
{'role': 'user', 'parts': ["I'm curious about the current applications of AI in various industries."]},
{'role': 'model', 'parts': ["AI is being used across many sectors such as healthcare for diagnostics, finance for fraud detection, and even in agriculture for crop monitoring and optimization. It's quite versatile."]},
{'role': 'user', 'parts': ["That sounds fascinating! What about the ethical considerations surrounding AI?"]},
{'role': 'model', 'parts': ["Ethical considerations in AI are critical. They include issues like bias in algorithms, job displacement, and the potential for misuse of AI-powered technologies."]},
{'role': 'user', 'parts': ["How do we address these ethical concerns effectively?"]},
{'role': 'model', 'parts': ["Addressing ethical concerns requires collaboration between technologists, policymakers, ethicists, and society at large. Transparency, accountability, and inclusive design are crucial in developing ethical AI systems."]}
]
chat = model.start_chat(history=chistory)
response = chat.send_message("What was the first answer you gave me at the beginning of the conversation?")
response.text
# 'My first response was "Hi there, how are you doing today?"'
Just for reference, I am using the version '0.3.2'.
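Since the str-before-iterable fix may only land in releases after 0.3.2, it can help to confirm which version is actually installed. A small sketch using only the standard library; installed_version is my own helper name, not an SDK function:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(name):
    # Return the installed version string for a package, or None if absent.
    try:
        return version(name)
    except PackageNotFoundError:
        return None

print(installed_version('google-generativeai'))  # e.g. '0.3.2' if installed
```

Running pip install --upgrade google-generativeai and re-checking the reported version is a quick way to verify the fix is present in your environment.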
Description of the bug:
Sometimes the gemini-pro response is not readable, for example:
ofscrollingscrolling,makingitcompatiblewithvariousdevicesandinputmethods.\nExampleUsage:\n
\n//Assuming"dom"isaDOMelementand"card"isanobjectwitha"value"propertyrepresentingthecontainer.\ncheckAndScrollToDiv(dom,card.value);\n
\nInthiscontext,whenyoucallcheckAndScrollToDiv(dom,card.value)
,thefunctionwillcheckifthespecifiedDOMelement(dom
)ispositionedoutofthevisibleareawithinthecontainer(card.value
)and,ifso,scrollsthecontainertobringtheelementopintoviewatthetop.\nConclusion:\nThecheckAndScrollToDiv
functionplaysacrucialroleinimprovinguserexperienceandaccessibilitybyensuringthatelementsarevisiblewithinthecontainer.Itscompatibilitywithvariousdevicesandinputmethodsmakesitavaluabletoolforcreatinguser-friendlyapplications.
This is in stream mode, using print(repr(resp.text)). There were no spaces between the words and the output was not readable, even after I added a note at the end of my question:
question += 'please return a formatted and human readable answer.'
Sometimes, when I asked gemini-pro to return a readable string again, it worked; sometimes it did not.
Actual vs expected behavior:
actual: ihaveaquesiton
expected: i have a question
Any other information you'd like to share?
python 3.9
google-ai-generativelanguage 0.4.0
google-api-core 2.15.0
google-auth 2.25.2
google-generativeai 0.3.2
googleapis-common-protos 1.62.0