Closed technillogue closed 10 months ago
We get different results with peft vs. exllama LoRA -- with exllama the fine-tuning doesn't seem to be respected. Is there anything that needs to be done differently from example_lora.py for subsequent predictions to still use the LoRA?
Turns out we had been setting `.lora = lora` on the wrong object.
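To illustrate the failure mode: a hypothetical minimal reproduction (plain Python stubs, not the actual exllama classes) of why assigning the adapter to the wrong object silently leaves generation on the base weights. The generator only consults its own `.lora` attribute, so setting `.lora` on, say, the model object has no effect:

```python
# Hypothetical stand-ins for illustration -- not the real exllama API.
class Lora:
    def __init__(self, name):
        self.name = name

class Model:
    def __init__(self):
        self.lora = None  # assigning here is silently ignored at generation time

class Generator:
    def __init__(self, model):
        self.model = model
        self.lora = None  # this is the attribute actually read during generation

    def active_adapter(self):
        return self.lora.name if self.lora is not None else "base weights"

model = Model()
gen = Generator(model)

# Bug: adapter attached to the wrong object -- generation ignores it
model.lora = Lora("my-finetune")
print(gen.active_adapter())  # -> "base weights"

# Fix: attach the adapter to the generator, as example_lora.py does
gen.lora = Lora("my-finetune")
print(gen.active_adapter())  # -> "my-finetune"
```

No error is raised in the buggy case, which is why the predictions look like plain base-model output rather than failing loudly.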