acon96 / home-llm

A Home Assistant integration & Model to control your smart home using a Local LLM

Problem talking to the backend #188

Open andreas-bulling opened 3 months ago

andreas-bulling commented 3 months ago

Installation went fine but I get the following error when trying to invoke the assistant:

Sorry, there was a problem talking to the backend: RuntimeError('llama_decode returned 1')


acon96 commented 3 months ago

Can you provide more information? What model were you using? How many entities do you have exposed? Were there any errors or warnings in the HA logs?

Teagan42 commented 2 months ago

llama_decode is a function from llama.cpp, exposed through the llama-cpp-python bindings: https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.llama_cpp.llama_decode

Does your model work if you call it directly, i.e. bypassing Home Assistant and using the runner's CLI or API?
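
For reference, one way to exercise the backend outside Home Assistant is to hit it over HTTP. This sketch assumes the model is served by llama.cpp's `llama-server` (or any runner exposing an OpenAI-compatible `/v1/chat/completions` endpoint) on `localhost:8080`; the URL and prompt are placeholders to adjust for your setup:

```python
import json
import urllib.error
import urllib.request


def probe_backend(url="http://127.0.0.1:8080/v1/chat/completions"):
    """Send one chat request to the runner; return the reply text, or None on failure."""
    payload = json.dumps({
        "messages": [{"role": "user", "content": "Turn on the kitchen light."}],
        "max_tokens": 32,
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]
    except (urllib.error.URLError, OSError) as exc:
        # Backend unreachable or it returned an error -- roughly the same
        # failure Home Assistant would surface as a "problem talking to the backend".
        print(f"backend check failed: {exc}")
        return None
```

If this fails in the same way, the problem is in the runner or model rather than the integration.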