Closed: binarybeastt closed this issue 1 month ago.
Could you try running inference with the "old model"? That is, load the model with old_model instead of model_id. If it raises the same error, then it is probably a versioning issue with the transformers library (e.g., see https://github.com/huggingface/transformers/issues/29426). If the old model works but the finetuned model doesn't, then there might be something wrong with lmms-finetune and I will take a look.
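
A minimal sketch of the suggested sanity check, assuming old_model points at the original base checkpoint and model_id at the finetuned one; both paths below are placeholders, not values from the original report:

```python
# Sanity check: load the base model ("old model") instead of the finetuned
# checkpoint and run the same inference code. The paths below are assumptions,
# not values from the issue.
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

old_model = "llava-hf/llava-interleave-qwen-7b-hf"  # assumed base checkpoint
model_id = "./checkpoints/llava-interleave-lora"    # assumed finetuned checkpoint

model = LlavaForConditionalGeneration.from_pretrained(
    old_model,            # swap back to model_id once the base model works
    torch_dtype=torch.float16,
    device_map="auto",    # requires accelerate; drop if loading on a single device
)
processor = AutoProcessor.from_pretrained(old_model)

# If this load (or a subsequent model.generate call) fails with the same
# traceback, the problem is the transformers version rather than the
# finetuned weights.
```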
It was a versioning issue with the transformers library; I only had to perform an upgrade.
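
For reference, a quick way to confirm the installed transformers version before and after the upgrade (the exact minimum version needed isn't stated in the thread):

```python
# Print the currently installed transformers version; upgrade from the shell
# with `pip install --upgrade transformers` if it predates the fix referenced
# in the linked transformers issue.
import transformers
print(transformers.__version__)
```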
I'm getting errors after trying to perform inference on an interleave model I fine-tuned using LoRA quantization. Here's the code:
Here's the error with the traceback: