acon96 / home-llm

A Home Assistant integration & Model to control your smart home using a Local LLM

Support for lower parameter models #184

Open panikinator opened 1 month ago

panikinator commented 1 month ago

Have you tried experimenting with lower-parameter models like Flan-T5, ALBERT, BERT, etc., or even Qwen 0.5B? With fine-tuning, might they suffice in this specific domain? I have a low-end machine, and even TinyLlama is kind of slow. I have tried tinkering around with your existing codebase, but I lack the skills as well as the horsepower to do so.

Hats off for working on this awesome project btw

acon96 commented 1 month ago

I did some experiments with Qwen2 0.5B and the results were quite impressive compared to models like Phi-2 from last year: https://github.com/acon96/home-llm/blob/feature/polish-dataset/docs/experiment-notes-qwen.md#tinyhome-qwen-rev3

I definitely think that this project shows a great use case for fine-tuning smaller models instead of relying on the zero-shot performance of larger models (7B+) using in-context learning examples.

I'll see if I can get some time in the next few weeks to re-run the training, because the specific run I linked had an issue where the model didn't want to use the EOS token and rambled on about random stuff after turning on your lights (which, while it can be hilarious, is not ideal).
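For reference, the usual fix is just making sure every training example actually ends with the EOS token, so the model learns to stop. A minimal sketch below; the model ID is the Qwen2 0.5B base model, but the `"text"` field and max length are illustrative assumptions, not this repo's actual training code:

```python
# Hypothetical sketch: append the EOS token to every training example so the
# fine-tuned model learns when to stop generating. If EOS never appears in
# the labels, the model never learns to emit it and rambles past the answer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")

def tokenize_example(example):
    # "text" is an assumed dataset field name for this sketch.
    text = example["text"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=2048)
```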

panikinator commented 2 weeks ago

So Hugging Face released a new series of LLMs called SmolLM. I want to experiment with the 135M-parameter one, but I only have a GPU with 4 GB of VRAM 🥲. With my limited knowledge, I tried fine-tuning that model using LoRA on my GPU but ran into CUDA OOM errors. Do I have any chance of making it work?
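For anyone in a similar spot, here is a minimal sketch of the memory-saving knobs that might let a 135M LoRA run fit in 4 GB: fp16 weights, a batch size of 1 with gradient accumulation, activation checkpointing, and 8-bit optimizer states. The model ID is SmolLM's actual Hugging Face ID, but every hyperparameter below is an illustrative assumption, not a tested recipe:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Load weights in fp16 to halve memory relative to fp32.
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM-135M",
    torch_dtype=torch.float16,
)

# Adapt only the attention projections; keeps the trainable footprint tiny.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Trade compute for memory: smallest batch, gradient accumulation to keep an
# effective batch of 16, checkpointed activations, and a paged 8-bit AdamW
# (via bitsandbytes) so optimizer states don't blow past 4 GB.
args = TrainingArguments(
    output_dir="smollm-135m-lora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    gradient_checkpointing=True,
    fp16=True,
    optim="paged_adamw_8bit",
)
```

Shortening `max_length`/context size during tokenization also helps a lot, since activation memory scales with sequence length.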