Howdy, I was following the steps in your recent blog post to get set up on a new machine, and I failed at the last step with:
> llm -m gguf -o path mixtral-8x7b-v0.1.Q4_K_M.gguf '[INST] Write a Python function that downloads a file from a URL[/INST]'
Error: 'gguf' is not a known model
Here's a quick PR to fix it 😊 It looks like LlamaGGUF should be registered in all cases, and the existence check for models.json is only relevant for the model-registration logic.
Thank you for such an awesome tool, glad to be able to contribute in a small way!