The llama.cpp loader accepts a path for GGUF models, but I was not able to find an equivalent option for models loaded with HF transformers. I have quite a few models that I already use with https://github.com/oobabooga/text-generation-webui/, and it's not ideal having to download them again just for lmql.