QuangBK / localLLM_guidance

Local LLM ReAct Agent with Guidance
151 stars · 24 forks

How to get the required model? #1

Closed · ibehnam closed this issue 1 year ago

ibehnam commented 1 year ago

I noticed you use GPTQ, but apparently it's incompatible with M1 Macs. Can you please generalize your project so that any model can be selected, even models that M1 Macs can run?

QuangBK commented 1 year ago

You can replace my GPTQ model with other models. Just swap the model and tokenizer in my code with your own model and tokenizer by modifying this line. Guidance also has several examples of using local LLMs in its documentation.

If you want a GGML model, support for llama-cpp-python in Guidance is being worked on (see this). For now, you may want to check this notebook from another fork.

ibehnam commented 1 year ago

Thank you so much!