Open ChaoGaoUCR opened 11 months ago
Hi,
Yes, you can evaluate the original (untuned) models by commenting out lines 222-227 in evaluate.py. If the model you want to use is already supported, you can simply pass the argument --base_model. If not, you need to specify the argument --target_modules, or add a mapping from the model to its target modules in LLM-Adapters/peft/src/peft/mapping.py.
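For the last option, here is a minimal sketch of what such a mapping edit could look like. The dict name `TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING` and the module names below follow upstream PEFT conventions and are assumptions for illustration, not the exact contents of LLM-Adapters' mapping.py:

```python
# Hypothetical sketch of the model -> target-modules mapping in
# LLM-Adapters/peft/src/peft/mapping.py. The dict name and entries
# mirror upstream PEFT conventions; check the file for the real ones.
TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING = {
    "llama": ["q_proj", "v_proj"],
    "bloom": ["query_key_value"],
    "gptj": ["q_proj", "v_proj"],
}

# To support a new architecture, add its model_type together with the
# names of the projection modules the adapter should wrap
# ("mistral" here is a hypothetical example):
TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING["mistral"] = ["q_proj", "v_proj"]

def lora_target_modules(model_type: str) -> list[str]:
    """Look up the adapter target modules for a given model type."""
    try:
        return TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING[model_type]
    except KeyError:
        raise ValueError(
            f"{model_type!r} is not in the mapping; pass --target_modules instead"
        )
```

If you would rather not edit the mapping file, passing --target_modules on the command line achieves the same thing for a one-off run.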
If you have any questions about adding unsupported models to the code base, please let us know and we will help with it!
Thanks!
Thank you so much! I will try this out😃
Dear Author,
Thanks for your great project. I was trying to evaluate the model both without tuning and with tuning, and I wondered whether we can evaluate the original (untuned) model. Also, if I want to use models other than LLaMA, BLOOM, and GPT-J, do I have to write that part myself?
Thanks