jlonge4 / local_llama

This repo is to showcase how you can run a model locally and offline, free of OpenAI dependencies.
Apache License 2.0

Awesome project! #13

Open mountainrocky opened 11 months ago

mountainrocky commented 11 months ago

This is an awesome project. I pulled the code and got it up and running quickly.

Do you have any idea how to improve the query results from my uploaded documents? Or how to fine-tune the LLM on my uploaded documents? Any suggestions for improving the search speed?

Thanks, Kevin

jlonge4 commented 11 months ago

@mountainrocky Thanks a lot, I'm glad you like it! I've been wanting to try running inference with Mojo for speed, but that's quite a dependency. As for fine-tuning, you'd have to actually use Hugging Face for that or get into some nitty-gritty coding.
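
For reference, here is a minimal sketch of what "use Hugging Face for fine-tuning" could look like, using LoRA via the `peft` library. The base model name, the `docs.txt` data file, and the hyperparameters below are illustrative assumptions, not part of this repo:

```python
# Hypothetical LoRA fine-tuning sketch with Hugging Face transformers + peft.
# Assumes your document text has been dumped to plain-text "docs.txt".
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base model, swap for yours
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token

model = AutoModelForCausalLM.from_pretrained(base_model)
# Wrap the model so only small low-rank adapter weights are trained.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Load your own document text and tokenize it for language modeling.
dataset = load_dataset("text", data_files={"train": "docs.txt"})["train"]
dataset = dataset.map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # saves adapter checkpoints under lora-out/
```

The LoRA route keeps the trainable parameter count small enough to run on a single consumer GPU, which is usually the practical option for fine-tuning on personal documents.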

jlonge4 commented 9 months ago


@mountainrocky Check out v3 and follow the new update instructions in the README. You won't be disappointed.