Closed c4m3r0nn closed 1 year ago
Same for me using oobabooga locally. I connect to Telegram, can see the bot, choose characters, etc., but no answer is generated. All I see in the terminal is "generate_answer 'mirostat_mode'" every time an answer should be generated. I've tried everything but am not able to solve the issue.
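For context, a message of the form `generate_answer 'mirostat_mode'` is consistent with a logged `KeyError`: the extension builds a dict of generation parameters, and a newer text-generation-webui looks up a key the older extension code never set. A minimal, hypothetical sketch of the defensive fix (the function name and default values here are illustrative, not the extension's actual code):

```python
def build_generate_params(user_params: dict) -> dict:
    # Hypothetical helper: supply defaults for keys that newer webui
    # versions expect; a missing key would otherwise raise
    # KeyError('mirostat_mode') during generation.
    defaults = {
        "mirostat_mode": 0,
        "mirostat_tau": 5.0,
        "mirostat_eta": 0.1,
        "epsilon_cutoff": 0,
    }
    params = dict(defaults)
    params.update(user_params)  # user-supplied values override defaults
    return params
```

This is the usual pattern for keeping an extension compatible with a fast-moving upstream: default every parameter, then overlay whatever the caller provides.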
I just aligned the text generator in https://github.com/innightwolfsleep/text-generation-webui-telegram_bot/pull/42 with the latest oobabooga version, but I can't check it now. If one of you can test the changes, please post the result here.
Can confirm it's working for me now :) Thank you for helping and for making this repo.
Sorry, not working for me. Still getting "generate_answer 'mirostat_mode'" in the terminal and nothing in Telegram.
Also, I wonder why I get the "generate_answer 'mirostat_mode'" error only with the Telegram extension and not when using the model through the oobabooga UI.
check it now
Not really. No more "generate_answer 'mirostat_mode'", and an answer is displayed in Telegram, but it's a long repetition of my prompt with empty answers from the bot, like: "You: Hi Bot: You: hi Bot: You: hi Bot" ...and so on.
UPD: https://github.com/innightwolfsleep/text-generation-webui-telegram_bot/commit/43a465fdf6d3de258799c55bbb4177bac76a2ff0 Just updated ooba and checked. It works with a llama 4-bit model, as far as I can see.
I was using "ggml-vic7b-uncensored-q5_1".
I am rather new to the game.
I'll do some tests and keep you posted. Thanks for your efforts.
@stfzz did you update the extension? I ran cmd_windows.bat, then:

```shell
cd text-generation-webui
cd extensions
cd telegram_bot
git pull
```

and everything worked for me: ggml v3 8-bit 13B, GPTQ 7B, 13B.
@Bahamut-ru I am using a conda environment and updated all extensions. Now I get text in Telegram, but it is messed up, with some of the characters talking to each other and the model answering itself. It's likely something I am missing in the settings, I guess. I'm using the "ggml-vic7b-uncensored-q5_1" model but can't figure out how to set things up. Basically, I am not able to get any sort of dialog: the model starts and generates text, apparently until the max tokens are reached. Thanks for any help.
Maybe someone can share the settings they use for the Telegram extension and oobabooga itself?
Just made a new install by using the installers but get the same results.
It seems there is no way to make this work for me. Maybe better to wait until it is out of beta.
Really wonder how you guys managed to make this work.
Can you share a screenshot of the input + generated text and a link to the model? If you get text generation up to the limit, you perhaps need to add certain stopping strings to the code. Some models are sensitive to the prompt text (a single spare character can break the whole dialog).
My guess is it is an eos_token issue. Trying to solve it...
You can add stopping strings in the code: TelegramBotGenerator -> get_answer() -> `stopping_strings.append(r"...")`
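As a sketch of how stopping strings typically work (the truncation helper below is illustrative, not the extension's actual code): the generated text is cut at the first occurrence of any stop string. One caveat worth noting: in Python, `r"\nYou:"` is a raw string containing a literal backslash and `n`, not a newline; `"\nYou:"` (without the `r` prefix) matches an actual line break.

```python
def truncate_at_stop(text: str, stopping_strings: list[str]) -> str:
    # Cut the generated text at the earliest occurrence of any stop string.
    cut = len(text)
    for stop in stopping_strings:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

stopping_strings = []
stopping_strings.append("\nYou:")    # real newline followed by "You:"
stopping_strings.append("\nHuman:")
```

So if the model emits `"Hello there.\nYou: hi"`, the helper returns just `"Hello there."`, which is why getting the stop string byte-for-byte right (newline vs. literal `\n`, case, leading `### `) matters.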
Thanks. I was trying to use telegram_config.cfg for this. Will try modifying code.
Would that look like this?

```python
stopping_strings.append(r"\nYou:")
```
Btw, now it's again: "Bot: You: Hi Bot: You: Hi Bot: You: Hi" ...and so on.
I really don't know what exactly is wrong. I use ordinary settings (llama-7b-4bit and the "NovelAI-Sphinx Moth" preset) and it works fine. Vicuna has a specific prompt syntax, and that may be the reason it doesn't work properly. If the bot sends an empty answer, it means the problem happened when it sent the "question" to text-generation-webui's generate_reply function and it couldn't answer. When I had a similar problem, I tried to debug what generate_reply does. I can't reproduce your result, so I can't help directly. If you can share a link to your model, I can try to reproduce it.
Hi. Thanks for pointing out some directions. Will investigate further. I was using "ggml-vic7b-uncensored-q5_1" from this repo: https://huggingface.co/eachadea/ggml-vicuna-7b-1.1/tree/main
I am using a 65B model and it's working absolutely fine, without problems. Some models need specific stopping strings.
Is this the right way to add stopping strings to TelegramBotGenerator.py?

```python
stopping_strings.append(r"\nHuman:")
```
> Is this the right way to add stopping strings to TelegramBotGenerator.py? `stopping_strings.append(r"\nHuman:")`

Right.
If that doesn't work, try: `stopping_strings.append("Human:")`
Already tried :-)
I made a custom character and got a result: Answerer_vic7b.zip
Back to generating text, pretty good indeed. I just can't figure out how to make it stop chatting with itself. None of these seems to work:

```python
stopping_strings.append(r"\nHUMAN::")
stopping_strings.append("HUMAN::")
stopping_strings.append(r"\nHuman:")
stopping_strings.append(r"\n### Human:")
stopping_strings.append("### Human:")
stopping_strings.append("Human:")
stopping_strings.append(r"\nHUMAN:")
stopping_strings.append(r"\n### HUMAN:")
stopping_strings.append("### HUMAN:")
```
to my todo list: move stopping_strings and eos_token to telegram_config.cfg ))))))
Added.
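For illustration only, a hypothetical `telegram_config.cfg` fragment showing what the new options might look like; the actual option names and value syntax may differ, so check the repo's sample config:

```
# Hypothetical keys; verify against the repo's sample telegram_config.cfg
stopping_strings=\nHuman:,\n### Human:
eos_token=</s>
```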
Thanks a lot for your help! Managed to make it work, also thanks to your suggestions.
I am using the Telegram bot with the Oobabooga Google Colab.
I am able to connect; however, every message is empty and the terminal says "generate_answer 'epsilon_cutoff'".
Any ideas on a fix for this?