innightwolfsleep / text-generation-webui-telegram_bot

LLM telegram bot

limit? #198

Open cusiman opened 9 months ago

cusiman commented 9 months ago

The bot works, but once the conversation passes 4000 tokens the bot becomes unstable in its responses. I have already changed these parameters: truncation_length and chat_prompt_size, but the problem persists.

innightwolfsleep commented 9 months ago

Which model do you use? Apart from truncation_length and chat_prompt_size there are no other context-length parameters, so perhaps it is a model problem.
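
For context: those two parameters only trim the conversation history so the prompt fits inside the model's window. Conceptually it works like the sketch below (a hypothetical token counter for illustration, not the bot's actual code):

```python
# Rough sketch of context trimming, NOT the bot's actual code:
# drop the oldest messages until the prompt fits the model's window.

from typing import Callable, List

def trim_history(
    messages: List[str],
    count_tokens: Callable[[str], int],  # any tokenizer's token counter
    max_tokens: int = 4096,              # e.g. stock LLaMA2 window
    reserved_for_reply: int = 512,       # leave room for the generation
) -> List[str]:
    """Keep only the most recent messages that fit in the window."""
    budget = max_tokens - reserved_for_reply
    kept: List[str] = []
    used = 0
    # walk from newest to oldest so recent context survives
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

# crude whitespace "tokenizer", just for demonstration
history = ["hello world"] * 10
print(trim_history(history, lambda s: len(s.split()),
                   max_tokens=14, reserved_for_reply=2))
```

But if the model itself was only trained on a 4096-token window, no amount of trimming on the bot side will make longer contexts behave well.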

cusiman commented 9 months ago

I tried with Wizard Vicuna 30B and now with LLaMA2-13B-Psyfighter2, and I got the same problem. Which uncensored model would be ideal?

innightwolfsleep commented 9 months ago

Hard to say... As far as I know, default LLaMA2 supports a 4096-token context length, but some Llama2 forks support up to 16k tokens, so I don't know about LLaMA2-13B-Psyfighter2. I'll try to test it later; perhaps I will find something.
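
One quick way to check the window a model was trained with (a minimal sketch, assuming the model is published in Hugging Face format; the repo name below is only an example):

```python
# Check a model's trained context window from its Hugging Face config.
from transformers import AutoConfig

# example repo name; substitute the model you actually use
config = AutoConfig.from_pretrained("KoboldAI/LLaMA2-13B-Psyfighter2")

# Llama-family configs expose the window as max_position_embeddings
print(config.max_position_embeddings)
```

Stock LLaMA2 checkpoints report 4096 there, while extended-context forks report 16384 or more.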

cusiman commented 9 months ago

Thanks for your response. Which model do you recommend?