acon96 / home-llm

A Home Assistant integration & Model to control your smart home using a Local LLM

text-generation-webui API chat completion broken (selects ooba assistant, ignores sysprompt and template) #139

Open StefanDanielSchwarz opened 2 months ago

StefanDanielSchwarz commented 2 months ago

Describe the bug
text-generation-webui API chat completion broke completely after I tried using a "Generation Preset/Character Name" of my own. It stays broken even after removing that value, and even after deleting and re-adding the services.

For some reason, it now always loads ooba's default Assistant character and uses that, without the home-llm system prompt and without the correct prompt template. I've spent about an hour hunting for a configuration file or database entry that might have changed and could be stuck on the old value, but I didn't find anything. I restarted, deleted, and reconfigured both ooba and home-llm, but it remains messed up.

Expected behavior
Just as before: when the "Generation Preset/Character Name" box is empty, none of ooba's default characters should be used; the home-llm system prompt and prompt template should be applied instead.

Logs
ooba's console log:

19:34:08-504205 INFO     PROMPT=
<BOS_TOKEN>The following is a conversation with an AI Large Language Model. The AI has been trained to answer questions, provide recommendations, and help with decision making. The AI follows user requests. The AI thinks outside the box.

You: HELP!
AI:

As you can see, that's ooba's default Assistant prompt, not the home-llm prompt. And no prompt template is applied, although everything looks correct in the home-llm settings UI.

Update:

After almost two hours of troubleshooting, it's still broken even though I completely reinstalled ooba and home-llm from scratch! I don't think ooba keeps any hidden config files or registry entries, so I wonder whether home-llm does and fails to remove them when the integration is deleted?

Totally stumped right now, as everything worked perfectly before I touched that cursed "Generation Preset/Character Name" field. Now, no matter which model I use, as soon as I enable "Use chat completions endpoint", it ignores the system prompt and prompt template and falls back to ooba's default Assistant character.
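
A minimal way to check whether the endpoint honors the system prompt at all, independent of home-llm, would be something like this (a sketch only; the host, port, and the extra ooba-specific fields mentioned in the comment are assumptions from my setup):

import requests

# Sketch: send a chat completion request straight to ooba's OpenAI-compatible
# API, bypassing home-llm, to check whether the system message is honored.
# Host and port are placeholders for my local setup.
URL = "http://127.0.0.1:5000/v1/chat/completions"

payload = {
    "messages": [
        {"role": "system", "content": "You control a smart home and answer tersely."},
        {"role": "user", "content": "HELP!"},
    ],
    "max_tokens": 128,
    # ooba also seems to accept extra fields such as "mode", "character" and
    # "instruction_template" here; I haven't pinned down which of these
    # home-llm actually sends.
}

resp = requests.post(URL, json=payload, timeout=60)
print(resp.json()["choices"][0]["message"]["content"])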

Update 2:

Three hours later and still no luck. I'm trying to find out where home-llm stores its settings; I've searched the database and the file system. Any pointers? The integrations I set up must be saved somewhere, after all...
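
If Home Assistant keeps integration settings under its .storage folder (I believe config entries live in <config>/.storage/core.config_entries), something like this should dump whatever home-llm saved; the "llama_conversation" domain name is a guess on my part:

import json

# Sketch: print the stored config entry and options for the home-llm
# integration. The path assumes a standard Home Assistant config directory.
STORAGE_FILE = "/config/.storage/core.config_entries"

with open(STORAGE_FILE) as f:
    store = json.load(f)

for entry in store["data"]["entries"]:
    # "llama_conversation" is an assumption for home-llm's domain;
    # if that's wrong, just grep the file for "llama" instead.
    if entry.get("domain") == "llama_conversation":
        print(json.dumps(entry, indent=2))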

Also, if I switch Chat Mode to "Instruct", I can enter an instruct prompt template's name, but it's ignored; home-llm's "Prompt Format" option is used instead, and not even consistently (only ChatML, Alpaca, and Mistral are applied; ALL the others result in Alpaca being used)!
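
To illustrate what I mean by the prompt format: for the same system prompt and user message, I'd expect roughly this difference between ChatML and Alpaca (a sketch of the two conventions, not home-llm's actual code):

def format_chatml(system: str, user: str) -> str:
    # ChatML wraps each message in <|im_start|>role ... <|im_end|> markers.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )


def format_alpaca(system: str, user: str) -> str:
    # Alpaca uses plain-text "### Instruction:" / "### Response:" headers.
    return (
        f"{system}\n\n"
        f"### Instruction:\n{user}\n\n"
        "### Response:\n"
    )


print(format_chatml("You control a smart home.", "Turn on the kitchen light."))
print(format_alpaca("You control a smart home.", "Turn on the kitchen light."))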

StefanDanielSchwarz commented 1 month ago

There's a PR for prompt templates for Command-R and Phi, which would make the text completion endpoint work with these models. It's not a fix for this issue with the chat completion endpoint, but at least it would provide an alternative.
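
For reference, those templates look roughly like this, sketched from the published model cards; the Phi variant assumes Phi-3-style tags:

def format_command_r(system: str, user: str) -> str:
    # Command-R turn-based template, sketched from Cohere's model card.
    return (
        "<BOS_TOKEN>"
        f"<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system}<|END_OF_TURN_TOKEN|>"
        f"<|START_OF_TURN_TOKEN|><|USER_TOKEN|>{user}<|END_OF_TURN_TOKEN|>"
        "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
    )


def format_phi3(system: str, user: str) -> str:
    # Phi-3-style tags; assumed here, since "Phi" could also mean the older
    # Phi-2 "Instruct:/Output:" format.
    return (
        f"<|system|>\n{system}<|end|>\n"
        f"<|user|>\n{user}<|end|>\n"
        "<|assistant|>\n"
    )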

I'll still continue looking for a fix for this, too...