c0sogi / LLMChat

A full-stack web UI implementation for large language models, such as ChatGPT or LLaMA.
MIT License
257 stars · 45 forks

Referencing Models #22

Closed DrewPear309 closed 1 year ago

DrewPear309 commented 1 year ago

In llms.py, your example model reference is model_path="./llama_models/ggml/Wizard-Vicuna-7B-Uncensored.ggmlv2.q4_1.bin". This led me to believe the model should be at LLMChat/app/llama_models/ggml/Wizard-Vicuna-7B-Uncensored.ggmlv2.q4_1.bin, but the app is not able to load the model. I created the llama_models and ggml directories myself, as there were none. Thank you.

c0sogi commented 1 year ago

I created a directory called llm_models, but it doesn't seem to have been uploaded to the repository because of `*.txt` in `.gitignore`. Thanks for pointing it out.

DrewPear309 commented 1 year ago

Am I correct in assuming the example paths you give in llms.py resolve to LLMChat/app/llama_models/ggml/model.bin?

c0sogi commented 1 year ago

No. '.' means the current working directory, which is 'LLMChat'.

So the appropriate path is 'LLMChat/llama_models/ggml/model.bin'.

You must download a GGML-quantized bin file and put it in that path.
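To illustrate the point about '.': a relative model_path is resolved against the directory the server is launched from, not against the location of llms.py. A minimal sketch (the file name here is the example from llms.py; it is not bundled with the repository):

```python
from pathlib import Path

# "./llama_models/ggml/..." resolves against the process's current working
# directory, which should be the LLMChat repository root when you launch the app.
model_path = Path("./llama_models/ggml/Wizard-Vicuna-7B-Uncensored.ggmlv2.q4_1.bin")

print(Path.cwd())            # launch directory; should be .../LLMChat
print(model_path.resolve())  # absolute location the loader will try to open
print(model_path.exists())   # True only once the GGML .bin file is placed there
```

Running this from the repo root before starting the server is a quick way to confirm the model file is where the loader expects it.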

DrewPear309 commented 1 year ago

> No. '.' means the current working directory, which is 'LLMChat'.
>
> So the appropriate path is 'LLMChat/llama_models/ggml/model.bin'.
>
> You must download a GGML-quantized bin file and put it in that path.

Thank you. I'm a newb and didn't understand the syntax.