anshumantanwar opened 1 year ago
If you are getting exactly the same results, it might be that you are not loading the adapter weights at inference time, and are actually just running the base model without the LoRA adapters.
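One way to rule this out is to inspect the saved adapter's state dict: in standard LoRA setups the `lora_B` matrices are initialized to zero so the adapter starts as a no-op, which means that if every `lora_B` weight is still zero (or the LoRA keys are missing entirely), the adapter contributes nothing and you will see base-model outputs. A minimal sketch of that check, using plain dicts of floats in place of real tensors (the key names follow the usual PEFT-style `lora_A`/`lora_B` convention, but your checkpoint's names may differ):

```python
def check_adapter(state_dict):
    """Return (found, nonzero): whether any LoRA keys exist in the
    state dict, and whether any lora_B weight has moved off zero."""
    lora_keys = [k for k in state_dict if "lora" in k]
    found = bool(lora_keys)
    # lora_B is zero-initialized; all-zero lora_B means the adapter is inert.
    nonzero = any(
        any(abs(v) > 1e-8 for v in state_dict[k])
        for k in lora_keys
        if "lora_B" in k
    )
    return found, nonzero

# Toy example: an adapter whose lora_B never moved off its zero init.
sd = {
    "q_proj.lora_A.weight": [0.1, -0.2],
    "q_proj.lora_B.weight": [0.0, 0.0],  # still at init -> no effect
}
print(check_adapter(sd))  # (True, False): adapter present but inert
```

If this reports `(False, False)` on your saved checkpoint, the LoRA weights were never saved; `(True, False)` suggests training never updated them; only `(True, True)` is consistent with a trained adapter that should change the outputs.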
Did you solve the problem? I just ran into the same issue: the loss on my eval dataset is very low, but I get identical outputs when I use the Alpaca generate.py script to run inference with the LoRA weights.
I tried QLoRA on Dolly 2.0 3B for specs identification, with a dataset of 2,000 items. QLoRA fine-tuning and inference both run without errors, but the results are exactly the same as the original Dolly 3B model. Is this common behavior?