I have been trying to test this, but the application is unable to load the model and throws the error below. I am using the quantised LoRA model from the gpt4all repo for testing, on a WSL distribution on Windows 11, with a .txt file in the data folder and the model in the model folder. The model path is configured as follows, and this is the error output:
Model Path= models/gpt4all/gpt4all-lora-quantized.bin
llama.cpp: loading model from models/gpt4all/gpt4all-lora-quantized.bin
error loading model: llama.cpp: tensor '�~5��x�{�d�HuV' should not be 131072-dimensional
llama_init_from_file: failed to load model
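In case it helps, here is a minimal sketch (my own check, not part of the application, and assuming the historical ggml/ggmf/ggjt magic constants used by llama.cpp) that reads the first four bytes of the .bin file to see which container format it reports; an unrecognised magic might explain a garbled tensor name like the one above:

# Quick sanity check of the model file's magic bytes.
# The magic values below are the llama.cpp container magics as far as I know;
# treat them as an assumption.
import struct

MODEL_PATH = "models/gpt4all/gpt4all-lora-quantized.bin"

KNOWN_MAGICS = {
    0x67676D6C: "ggml (unversioned legacy format)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-friendly)",
}

with open(MODEL_PATH, "rb") as f:
    (magic,) = struct.unpack("<I", f.read(4))

print(f"magic = {magic:#010x} -> {KNOWN_MAGICS.get(magic, 'unknown / not a ggml model')}")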
Any suggestions?