TheBlewish / Web-LLM-Assistant-Llamacpp-Ollama

A Python-based web-assisted large language model (LLM) search assistant using Llama.cpp
MIT License

Need some help with this weird bug relating to the LLM refusing to answer, opting to make a strange remark instead! #4

Closed TheBlewish closed 2 months ago

TheBlewish commented 2 months ago

Hi anyone who sees this, I desperately need some help with this problem. There is a weird issue where the LLM will often refuse to answer after performing a search, instead stating: "🤖 Assistant: Based on the absence of selected results and the overall content." I have no idea what causes this. It's consistent across different LLM models, and nowhere in the code does this statement appear, or even, to my recollection, anything similar. I have tried various prompt changes and fixes, but I can't resolve this issue so far. I would really really REALLY appreciate it if someone could lend me a hand in resolving this weird issue!

TheBlewish commented 2 months ago

I have just updated the program, and this issue is now fully fixed. The parsing of the final response was causing it. The bug is now resolved; please let me know if anyone experiences other issues!
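The repository code isn't shown in this thread, but the symptom described above (a stray lead-in sentence like "Based on the absence of selected results..." appearing instead of the answer) is typical of an over-eager parse that extracts the wrong fragment of the model's raw output. As a minimal illustrative sketch, assuming a hypothetical `Final Answer:` marker (the project's actual prompt format may differ), a defensive extractor falls back to the full text rather than returning a truncated fragment:

```python
import re

def extract_final_answer(raw_output: str) -> str:
    """Return the text after a 'Final Answer:' style marker if present;
    otherwise return the whole output rather than a truncated fragment.

    The marker name is a hypothetical stand-in for whatever delimiter
    the prompt instructs the model to emit.
    """
    match = re.search(r"Final Answer:\s*(.*)", raw_output,
                      re.DOTALL | re.IGNORECASE)
    if match and match.group(1).strip():
        return match.group(1).strip()
    # Fallback: never return an empty or partial parse. Bugs of the kind
    # described in this issue surface when the parser keeps a leading
    # fragment (or nothing) whenever the expected marker is missing.
    return raw_output.strip()
```

With this shape of fallback, a response that lacks the marker is passed through intact instead of being reduced to a confusing partial sentence.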

ali0une commented 2 months ago

Was about to post this. Tested changing user agent, context size, layers offloaded and models ... still had this issue. Glad you fixed it.

TheBlewish commented 2 months ago

> Was about to post this. Tested changing user agent, context size, layers offloaded and models ... still had this issue. Glad you fixed it.

Yeah, thanks. It took me a little while to figure out; I think it had to do with the parsing. So if you've updated, is the issue now fixed for you too?

ali0une commented 2 months ago

Yes, just tested and it works again!

TheBlewish commented 2 months ago

Fantastic, I'm glad it's fixed for you!