acon96 / home-llm

A Home Assistant integration & Model to control your smart home using a Local LLM

Build docker image for addon using GitHub Actions #93

Closed acon96 closed 6 months ago

acon96 commented 6 months ago

Please describe what you are trying to do with the component
Currently, if you want to use the provided text-generation-webui addon, you need to build the Docker image on the Home Assistant device itself. This can take a very long time and makes it look like the device is frozen.

Describe the solution you'd like
We should provide pre-built Docker images by using GitHub Actions to build and push the images to ghcr.io.

Additional context
An example of publishing HA addons using GitHub Actions is here: https://github.com/home-assistant/addons-example/tree/main/.github/workflows
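A minimal sketch of such a workflow, using the generic docker/build-push-action rather than the Home Assistant builder from the linked example. The `./text-generation-webui` context path and the image tag are illustrative assumptions, not paths from this repo:

```yaml
name: Publish addon image

on:
  push:
    branches: [main]

# The workflow needs permission to push packages to ghcr.io
permissions:
  contents: read
  packages: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Authenticate against GitHub Container Registry with the built-in token
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Build the addon image and push it to ghcr.io
      - uses: docker/build-push-action@v5
        with:
          context: ./text-generation-webui  # assumed addon directory
          push: true
          tags: ghcr.io/${{ github.repository }}/text-generation-webui:latest
```

With images published this way, the addon's `config.yaml` could reference the registry image instead of building locally on the device.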

acon96 commented 6 months ago

closing since the addon is going to be deprecated.

Erudition commented 5 months ago

> closing since the addon is going to be deprecated.

What exactly does this mean? Is there any changelog or document or post about this news?

Are you referring only to the text-generation component? What is the replacement that it's deprecated in favor of?

acon96 commented 5 months ago

> > closing since the addon is going to be deprecated.
>
> What exactly does this mean? Is there any changelog or document or post about this news?
>
> Are you referring only to the text-generation component? What is the replacement that it's deprecated in favor of?

The replacement is to set up text-generation-webui on another machine, or, if you want to run the model on the same machine as Home Assistant, to use the llama.cpp backend.

Also, I'm not removing the addon, but it has issues that are avoided by running the model directly with llama.cpp, so I would like to steer new users toward that.
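For the first option, a rough sketch of hosting text-generation-webui on a separate machine with its API enabled so the integration can reach it over the network. The `--listen` and `--api` flags are my understanding of text-generation-webui's server options; check its own docs before relying on them:

```shell
# On the machine that will host the model (not the Home Assistant device):
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui

# Start the server, listening on the LAN with the API enabled
# (flag names assumed from text-generation-webui; verify against its README)
python server.py --listen --api
```

The Home LLM integration would then be pointed at that machine's address instead of a local addon.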