acon96 / home-llm

A Home Assistant integration & Model to control your smart home using a Local LLM

Is it possible to add an entity for reloading the service? #18

Closed Gaffers2277 closed 6 months ago

Gaffers2277 commented 6 months ago

I have successfully integrated my text-generation-webui into my Home Assistant. I'm actually using dolphin-2.2.1-AshhLimaRP-Mistral-7B-GGUF, as I really just use it for the AI chat bot features and don't care about integrating it into my home for the moment.

So I have been playing around with the Home-LLM integration, but I have to reload it to send the reconfiguration to text-generation-webui every time I change the system prompt. I also turn off text-generation-webui when I don't need it, but then I have to reload the integration to get the model to load again. Is there a way you could add an entity so I can make a toggle switch on my HA dashboard to reload the service?

Love the integration; it works great, and now I have a fully local chat bot with TTS/STT and wake words thanks to it.
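Until a dedicated reload entity exists, one possible workaround is a Home Assistant script that calls the built-in `homeassistant.reload_config_entry` service, which a dashboard button can then trigger. This is only a sketch: the script name and the `conversation.home_llm` entity id are assumptions, and you would substitute any entity actually exposed by your Home-LLM config entry.

```yaml
# Hypothetical workaround: reload the Home-LLM config entry on demand.
# The entity_id below is an assumption; point it at any entity that
# belongs to the Home-LLM integration in your setup.
script:
  reload_home_llm:
    alias: "Reload Home-LLM integration"
    sequence:
      - service: homeassistant.reload_config_entry
        target:
          entity_id: conversation.home_llm
```

A dashboard button card that runs `script.reload_home_llm` would then reload the integration without opening the Settings > Devices & Services page.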

Anto79-ops commented 6 months ago

I just switched to LocalAI as the backend for this. I found text-generation-webui a bit glitchy, e.g., sometimes it unloads the model or template for some reason, so I have to go back to the webui and re-click it.

Gaffers2277 commented 6 months ago

> I just switched to LocalAI as the backend for this. I found text-generation-webui a bit glitchy, e.g., sometimes it unloads the model or template for some reason, so I have to go back to the webui and re-click it.

Yeah, I was following a video guide and he used LocalAI, but I think it can only run on Linux as far as I can tell. My AI is running on my 3080 when I'm not working, so for now it has to be on Windows. I have it working pretty well for the moment; I was just curious how each config was implemented. I can speak to the AI and get a response in just a few seconds, which is amazing on just my local machine. Pretty sure my model isn't compatible with OpenAI functions, but for now I don't really mind; I have like 3 smart lights. I mainly want the assist pipeline until I get around to writing a custom pipeline.

acon96 commented 6 months ago

This should be fixed in v0.2.1