Hello!
I am trying to run the following command, referenced from the tutorial here.
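It was along these lines (the -o path option and the example prompt are taken from the tutorial rather than my exact shell history, so treat them as approximate; any invocation of the model fails the same way):

# tutorial-style invocation of the gguf model (reconstruction; prompt is a placeholder)
llm -m gguf \
  -o path mixtral-8x7b-instruct-v0.1.Q6_K.gguf \
  '[INST] Write a Python function that downloads a file from a URL [/INST]'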
I'm getting this output:
Error: 'gguf' is not a known model
which I assume is coming from cli.py.

Some more details:
Installation steps, in order:

pipx install llm
llm install llm-llama-cpp
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 llm install llama-cpp-python
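(For what it's worth, my understanding from the llm docs is that llm plugins lists everything registered with the CLI, so that should confirm whether llm-llama-cpp actually installed:)

# should list llm-llama-cpp among the installed plugins
llm plugins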
I can confirm that the mixtral-8x7b-instruct-v0.1.Q6_K.gguf file is present, but I get the same error message if I just run llm -m gguf, so the error doesn't seem to be related to the file itself.
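For completeness, the minimal reproduction and a sanity check are below; the error text is exactly the one quoted above, and, if I understand the CLI correctly, llm models should list every model id the installed plugins provide, so a working setup would show gguf there.

# minimal reproduction: invoking the model id with no other arguments
llm -m gguf
# prints: Error: 'gguf' is not a known model

# sanity check: list all model ids registered with the llm CLI
llm models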
Hopefully I'm not missing something obvious. Let me know if I can be of help! I clicked around a bit hoping it might be an easy fix but couldn't find anything.
– NA