Closed: anastasiuspernat closed this issue 2 months ago
Sorry, I never said that all GGUF models are supported. I only tried to make the LVM Model Loader node compatible with the GGUF format of llava, and I haven't had time to test whether other GGUF variants work, though in theory they should. The GGUF model you mention definitely does not work. You could try converting it into a format that ollama accepts, and then access it through ollama's API.
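In case it helps, here is a minimal sketch of the ollama route. Assumptions (not from this repo): Ollama is running on its default local port 11434, and the GGUF file has already been registered as a model (e.g. via a Modelfile with a `FROM ./model.gguf` line and `ollama create`); the model name below is a placeholder.

```python
# Hedged sketch: query a GGUF model served by a local Ollama instance
# through its REST API. "mistral-nemo" is a placeholder model name.
import json
import urllib.request


def build_generate_request(prompt, model="mistral-nemo"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON object instead of a stream
    }


def ask_ollama(prompt, model="mistral-nemo", host="http://localhost:11434"):
    """Send one non-streaming generate request and return the response text."""
    payload = json.dumps(build_generate_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask_ollama("Summarize this prompt in one sentence."))
```

Since ollama handles the GGUF loading itself, this sidesteps the config.json lookup entirely.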
In the description it says you support the GGUF format, but when I try any .gguf model, for example
InferenceIllusionist/Mistral-Nemo-Instruct-12B-iMat-GGUF
it fails with "does not appear to have a file named config.json". But GGUF is a single-file format; there is no config.json. I tried many approaches, such as downloading the model into a local folder and specifying that as the path, but the error is the same: it still looks for config.json. So what's the best working way to load GGUF models? Thanks!
P.S. I'm using the node called "Large Language Model Loader".