innightwolfsleep / text-generation-webui-telegram_bot

LLM telegram bot
MIT License

epsilon_cutoff #41

Closed. c4m3r0nn closed this issue 1 year ago.

c4m3r0nn commented 1 year ago

I am using the Telegram bot with the Oobabooga Google Colab.

I am able to connect; however, every message is empty and the terminal says "generate_answer 'epsilon_cutoff'".

Any ideas on a fix for this?

stfzz commented 1 year ago

Same for me, using oobabooga locally. I connect to Telegram, can see the bot, choose characters etc., but no answer is generated. All I see in the terminal is "generate_answer 'mirostat_mode'" every time an answer should be generated. I've tried everything but am not able to solve the issue.

innightwolfsleep commented 1 year ago

I just aligned the text generator with the latest oobabooga version in https://github.com/innightwolfsleep/text-generation-webui-telegram_bot/pull/42, but I can't check it right now. If one of you can test the changes, please post the result here.

c4m3r0nn commented 1 year ago

Can confirm it's working for me now :) Thank you for helping and for making this repo.

stfzz commented 1 year ago

Sorry, not working for me. Still getting the "generate_answer 'mirostat_mode'" in the terminal and nothing in Telegram.

stfzz commented 1 year ago

Also, I wonder why I get the "generate_answer 'mirostat_mode'" only with the Telegram extension and not when using the model through the oobabooga UI.

innightwolfsleep commented 1 year ago

Check it now.

stfzz commented 1 year ago

Not really. There is no more "generate_answer 'mirostat_mode'" and an answer is displayed in Telegram, but it's a long repetition of my prompt with empty answers from the bot, like: You: Hi Bot: You: Hi Bot: You: Hi Bot ...and so on.

innightwolfsleep commented 1 year ago

UPD: https://github.com/innightwolfsleep/text-generation-webui-telegram_bot/commit/43a465fdf6d3de258799c55bbb4177bac76a2ff0. I just updated ooba and checked. It works with a llama 4-bit model, as far as I can see.

stfzz commented 1 year ago

I was using "ggml-vic7b-uncensored-q5_1".

I am rather new to the game.

I'll do some tests and keep you posted. Thanks for your efforts.

BahamutRU commented 1 year ago

@stfzz did you update the extension? I ran cmd_windows.bat, then:

cd text-generation-webui
cd extensions
cd telegram_bot
git pull

and everything worked for me: ggml v3 8-bit 13B, GPTQ 7B and 13B.

stfzz commented 1 year ago

@BahamutRU I am using a conda environment and updated all extensions. Now I get text in Telegram, but it is messed up, with some of the characters talking to each other and the model answering itself. It's likely something I am missing in the settings, I guess. I am using the "ggml-vic7b-uncensored-q5_1" model but can't figure out how to set things up. Basically, I am not able to get any sort of dialog: the model starts and generates text, likely until the max tokens are reached. Thanks for any help.

stfzz commented 1 year ago

Maybe someone can share the settings they used for the Telegram extension and oobabooga itself?

I just made a new install using the installers, but I get the same results.

It seems there is no way to make this work for me. Maybe it's better to wait until it is out of beta.

I really wonder how you guys managed to make this work.

innightwolfsleep commented 1 year ago

Can you share a screenshot of the input + generated text and a link to the model? If text generation runs until the limit, you perhaps need to add certain stopping strings to the code. Some models are sensitive to the prompt text (a single spare character can break the whole dialog).

stfzz commented 1 year ago

My guess is that it's an eos_token issue. Trying to solve it...

innightwolfsleep commented 1 year ago

You can add stopping strings in the code: in TelegramBotGenerator, inside get_answer(), call stopping_strings.append(r"...").
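For illustration, a minimal runnable sketch of the idea, with generate_reply stubbed out; the real get_answer() in TelegramBotGenerator.py wraps oobabooga's generator and contains more logic than this:

def generate_reply(prompt, stopping_strings):
    # Stand-in for the real generator: truncate at the first stopping string.
    text = "Hello! \nYou: a user turn the model hallucinated"
    for s in stopping_strings:
        idx = text.find(s)
        if idx != -1:
            text = text[:idx]
    return text

def get_answer(prompt):
    stopping_strings = []
    # Cut the reply as soon as the model starts writing the user's next turn.
    stopping_strings.append("\nYou:")
    return generate_reply(prompt, stopping_strings)

print(get_answer("You: Hi\nBot:"))  # -> "Hello! "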

stfzz commented 1 year ago

Thanks. I was trying to use telegram_config.cfg for this. I will try modifying the code.

stfzz commented 1 year ago

Would that look like this?

stopping_strings.append(r"\nYou:")

Btw, now it's again: Bot: You: Hi Bot: You: Hi Bot: You: Hi ...and so on.

innightwolfsleep commented 1 year ago

I really don't know what exactly is wrong. I use ordinary settings (llama-7b-4bit and the "NovelAI-Sphinx Moth" preset) and it works fine. Vicuna has a specific prompt syntax, and that may be the reason why it doesn't work properly. If the bot sends an empty answer, it means the problem happened when it sent the "question" to text-generation-webui's generate_reply function and got nothing back. When I had a similar problem, I tried to debug what generate_reply does. I can't reproduce the same result as you, so I can't help more. If you can share a link to your model, I can try to reproduce it.

stfzz commented 1 year ago

Hi. Thanks for pointing out some directions. Will investigate further. I was using "ggml-vic7b-uncensored-q5_1" from this repo: https://huggingface.co/eachadea/ggml-vicuna-7b-1.1/tree/main

TruthSearchers commented 1 year ago

I am using a 65B model and it's working absolutely fine, without problems. Some models need specific stopping strings.

stfzz commented 1 year ago

Is this the right way to add stopping strings to TelegramBotGenerator.py?

stopping_strings.append(r"\nHuman:")

innightwolfsleep commented 1 year ago

Is this the right way to add stopping strings to TelegramBotGenerator.py? stopping_strings.append(r"\nHuman:")

Right.

If that won't work, try: stopping_strings.append("Human:")
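A side note that may matter here: in Python, r"\nHuman:" is a raw string containing a literal backslash followed by "n", not a newline character, so it only matches if the generator unescapes stopping strings before comparing. A quick check:

# r"..." keeps the backslash literal; "..." interprets \n as a newline.
raw = r"\nHuman:"
escaped = "\nHuman:"
print(raw == escaped)            # False
print(repr(raw), repr(escaped))  # '\\nHuman:' '\nHuman:'

If the bot compares stopping strings against the generated text literally, the non-raw form "\nHuman:" (with a real newline) is the one that will match.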

stfzz commented 1 year ago

Already tried :-)

innightwolfsleep commented 1 year ago

I made a custom character and got a result: Answerer_vic7b.zip

stfzz commented 1 year ago

Back to generating text, pretty good indeed. I just can't figure out how to make it stop chatting with itself. None of these seems to work:

stopping_strings.append(r"\nHUMAN::")
stopping_strings.append("HUMAN::")
stopping_strings.append(r"\nHuman:")
stopping_strings.append(r"\n### Human:")
stopping_strings.append("### Human:")
stopping_strings.append("Human:")
stopping_strings.append(r"\nHUMAN:")
stopping_strings.append(r"\n### HUMAN:")
stopping_strings.append("### HUMAN:")
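When none of the guessed stopping strings match, a generic debugging step (not specific to this repo) is to print the repr() of the raw reply so the exact characters of the unwanted turn are visible; spaces, newlines and case all matter:

# Hypothetical helper: inspect the raw reply to find the exact stop marker.
def debug_reply(reply: str) -> None:
    print(repr(reply))

debug_reply("Hi!\n### Human: how are you?")
# -> 'Hi!\n### Human: how are you?'
# Here the matching stopping string would be "\n### Human:" (real newline).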

innightwolfsleep commented 1 year ago

To my todo list: move stopping_strings and eos_token to telegram_config.cfg ))))))

Added.
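As a sketch of what that todo item could look like, assuming hypothetical section and option names that are not from this repo:

# Hypothetical sketch: read stopping strings and the eos token from
# telegram_config.cfg instead of hard-coding them in TelegramBotGenerator.
import configparser

config = configparser.ConfigParser()
config.read("telegram_config.cfg")

raw = config.get("generator", "stopping_strings", fallback="")
stopping_strings = [s for s in raw.split(",") if s]
eos_token = config.get("generator", "eos_token", fallback=None)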

stfzz commented 1 year ago

Thanks a lot for your help! I managed to make it work, thanks in part to your suggestions.