dlippold opened this issue 6 months ago
I fixed this upstream in https://github.com/ggerganov/llama.cpp/pull/6139, which should make it into the next release of GPT4All (the fix is already included in #2310).
Version 2.8.0 crashes when loading the model named above.
Bug Report
The fine-tuned MPT model from https://huggingface.co/maddes8cht/mosaicml-mpt-7b-instruct-gguf/ in Q4_1 quantization was usable in release 2.7.2 but no longer works in 2.7.3 and later, including the current release.
When I try to load the model file, I get the following error message:
The cause of the problem may be related to #2006.
Steps to Reproduce
1. Download the Q4_1 GGUF file of the fine-tuned MPT model from the Hugging Face repository linked above.
2. Try to load it in GPT4All 2.7.3 or later (a scripted reproduction is sketched below, after these steps).
3. GPT4All reports an error or crashes instead of loading the model.
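Since the GPT4All Python bindings wrap the same llmodel backend as the chat application, the failure should also be reproducible from a script. A minimal sketch, assuming the bindings are installed via `pip install gpt4all` and the Q4_1 file has been downloaded into the current directory (the file name below is an assumption; adjust it to the actual downloaded file):

```python
# Minimal reproduction sketch using the GPT4All Python bindings.
from gpt4all import GPT4All

# Hypothetical file name -- replace with the actual name of the downloaded Q4_1 GGUF file.
MODEL_FILE = "mosaicml-mpt-7b-instruct-Q4_1.gguf"

# allow_download=False forces GPT4All to load the local file instead of
# fetching a model from its built-in model list.
model = GPT4All(MODEL_FILE, model_path=".", allow_download=False)

# With 2.7.2 this prints a completion; with 2.7.3 and later the model
# fails to load before generation can start.
print(model.generate("Hello", max_tokens=16))
```

If this script works with the 2.7.2 bindings but fails with 2.7.3 and later, that would point to the regression being in the model loader rather than in the chat UI.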
Expected Behavior
The model file should load successfully, as it did in release 2.7.2.
Your Environment