danielgross / localpilot

MIT License
3.32k stars 141 forks

Adding compatibility with LM Studio model structure #27

Open tg1482 opened 6 months ago

tg1482 commented 6 months ago

I've noticed on Reddit, Twitter, etc. that a lot of people have started running local LLMs through LM Studio and similar tools. I have a PR ready that changes the file structure for how we save and load locally stored models, so one can directly use the models already downloaded through LM Studio rather than having to re-download them for localpilot.

I'd appreciate it if someone could grant me write access so I can open a PR in this repo.

Appreciate your work on localpilot!
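
For context, the rough idea (not the actual PR; the LM Studio path and the config keys below are just placeholders) is to scan LM Studio's download folder and reuse those GGUF files in place instead of re-downloading them:

```python
# Sketch only, not the real PR. Assumptions:
# - LM Studio keeps downloaded models under ~/.cache/lm-studio/models/<publisher>/<repo>/*.gguf
#   (the exact path can differ per OS and LM Studio version).
# - localpilot can point at an arbitrary local GGUF file; the keys "title" and "path"
#   are placeholders, not localpilot's real config schema.
from pathlib import Path

LM_STUDIO_MODELS = Path.home() / ".cache" / "lm-studio" / "models"

def discover_lm_studio_models(root: Path = LM_STUDIO_MODELS) -> list[dict]:
    """Return one entry per GGUF file already downloaded through LM Studio."""
    models = []
    for gguf in sorted(root.rglob("*.gguf")):
        # Assumes the <publisher>/<repo>/<file>.gguf layout described above.
        publisher = gguf.relative_to(root).parts[0]
        models.append({
            "title": f"{publisher}/{gguf.stem}",
            "path": str(gguf),  # reuse the file in place, no re-download
        })
    return models

if __name__ == "__main__":
    for m in discover_lm_studio_models():
        print(m["title"], "->", m["path"])
```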

limdingwen commented 5 months ago

Correct me if I'm wrong, but you don't need write access to open a PR: you can fork the repo and submit the pull request from your fork.

Fusseldieb commented 1 week ago

LM Studio being supported would be nuts! I'm actually also interested in this...

~~Who knows, maybe I'll do it, but no guarantees. I'm quite busy already...~~


EDIT: The debug options for pointing Copilot at a local proxy are outdated and no longer work. On that note, I tried out "Continue" from TogetherAI and their VSCode extension, downloaded LM Studio, loaded the model deepseek-coder-6.7b-base.Q3_K_S, and it has been impressive so far. Basically Copilot speeds on my RTX 2080 (notebook GPU).

Have fun! Just make sure you have at least 8 GB of VRAM so you can offload the whole model to the GPU (this has to be adjusted in LM Studio!) to get similar speeds.
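
If anyone wants to poke at it without an editor extension: once the model is loaded you can start LM Studio's local server (OpenAI-compatible API, default port 1234 as far as I can tell) and hit it from anything. A quick Python check might look like this; the port and the prompt are just what my setup uses, adjust to whatever LM Studio shows you:

```python
# Quick sanity check against LM Studio's local server.
# Assumption: the "Local Server" is started in LM Studio and listens on the
# default http://localhost:1234/v1 (change the URL if your instance differs).
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # LM Studio serves whatever model is currently loaded
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "max_tokens": 128,
        "temperature": 0.2,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```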


EDIT2: Upon further testing in an actual work environment, the small 6.7B models aren't exactly intelligent, to say the least... When working with libraries, for example, it has absolutely no clue what to do and just suggests "stuff" that doesn't even make sense in that particular context, even with a 1024-token context and a Q5 model. That's down to the model, though; a bigger or newer one may do better. Only time will tell.