acon96 / home-llm

A Home Assistant integration & Model to control your smart home using a Local LLM

Cannot install local model #65

Closed: juan11perez closed this issue 4 months ago

juan11perez commented 4 months ago

Installed the integration and manually downloaded/installed the local model.

When trying to add the integration to Home Assistant and selecting 'Local model', I get this error:

Pip returned an error while installing the wheel

Thank you

pbn42 commented 4 months ago

Same error here:

2024-02-16 16:36:24.605 ERROR (SyncWorker_9) [homeassistant.util.package] Unable to install package /config/custom_components/llama_conversation/llama_cpp_python-0.2.42-cp311-cp311-musllinux_1_2_x86_64.whl: ERROR: llama_cpp_python-0.2.42-cp311-cp311-musllinux_1_2_x86_64.whl is not a supported wheel on this platform.
2024-02-16 16:36:24.606 WARNING (MainThread) [custom_components.llama_conversation.config_flow] Failed to install wheel: False

I'm on a NUC (Intel Celeron N5095) running HAOS in a Proxmox VM. Both wheels from https://github.com/acon96/home-llm/tree/develop/dist are in the custom_components/llama_conversation/ folder.

Any way to build a custom wheel locally?

Thanks for your amazing work !

acon96 commented 4 months ago

Home Assistant 2024.2.1 updated from Python 3.11 to Python 3.12. I have published new wheels that are compatible with 3.12 in the /dist folder.
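
For anyone who wants to confirm this is the cause: pip rejects a wheel when none of the tags in its filename match the tags the running interpreter accepts. This is a minimal diagnostic sketch (not part of the integration) using the packaging library; run it with the same Python that Home Assistant uses:

```python
# Minimal diagnostic sketch: does this interpreter accept the wheel's tags?
# Requires the 'packaging' library; run with the Python that HA itself uses.
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

wheel = "llama_cpp_python-0.2.42-cp311-cp311-musllinux_1_2_x86_64.whl"
_, _, _, wheel_tags = parse_wheel_filename(wheel)

# sys_tags() yields every tag this interpreter can install.
if wheel_tags & set(sys_tags()):
    print("wheel is compatible with this interpreter")
else:
    print("not a supported wheel on this platform (what pip reported)")
```

On Python 3.12 the cp311 tags are not in sys_tags(), which is exactly why the install fails after the 2024.2.1 update.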

@pbn42 you can build your own wheel via the /dist/run_docker.sh script, provided Docker is installed. It builds the wheel inside the HA core image to ensure compatibility at runtime.
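
Roughly like this (a sketch; check the script itself for the exact options it supports):

```sh
# Sketch, assuming a checkout of this repo and a working Docker install.
git clone https://github.com/acon96/home-llm.git
cd home-llm/dist
./run_docker.sh   # builds llama-cpp-python inside the HA core image
# copy the resulting .whl into custom_components/llama_conversation/
```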

juan11perez commented 4 months ago

@acon96 Thank you. What do I type in the Local File Name box in the config flow?

acon96 commented 4 months ago

@juan11perez It should be the path to wherever you placed the model file on the Home Assistant filesystem.

I put mine in the /config/models folder that I made so the path would be /config/models/home-3b-v2.q5_k_m.gguf (replace with the quant level you downloaded).

Alternatively, if you choose the Llama.cpp (Huggingface) backend it will ask for a Repo ID and quantization level instead and will download the model for you.
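
If you want to rule out a bad download before pointing the config flow at the file, a quick out-of-band check (my own sketch, not part of the integration) is to load it with llama-cpp-python directly:

```python
# Hedged sanity check: confirm the GGUF file loads with llama-cpp-python
# before configuring the integration to use it.
from llama_cpp import Llama

# Path from the example above; replace with the quant level you downloaded.
llm = Llama(model_path="/config/models/home-3b-v2.q5_k_m.gguf")
out = llm("The kitchen light is", max_tokens=8)
print(out["choices"][0]["text"])
```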

juan11perez commented 4 months ago

@acon96 thank you. I was able to install it with your instructions, but as soon as I invoked it, it crashed my server.

acon96 commented 4 months ago

The only thing I know of that could cause that is llama.cpp failing to load the GBNF grammar file; that hard-segfaults the entire Home Assistant process. I need to figure that one out too, since it's not a good user experience.
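
Until that's fixed, one way to reproduce the crash without taking Home Assistant down is to parse the grammar in a throwaway child process, so a segfault only kills the child. A rough sketch (the grammar path here is hypothetical; substitute wherever your grammar file lives):

```python
# Rough sketch: parse the grammar in a child process so a segfault in
# llama.cpp kills only the child, not Home Assistant.
import subprocess
import sys

snippet = (
    "from llama_cpp import LlamaGrammar\n"
    # Hypothetical path; point this at your actual .gbnf file.
    "LlamaGrammar.from_file('/config/custom_components/llama_conversation/output.gbnf')\n"
    "print('grammar parsed OK')\n"
)
result = subprocess.run([sys.executable, "-c", snippet])
# A negative return code means the child died from a signal (-11 = SIGSEGV).
print("exit code:", result.returncode)
```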

juan11perez commented 4 months ago

thank you again @acon96

acon96 commented 4 months ago

You're welcome. Closing as resolved.