Hi, I noticed that the model card says the Adam optimizer is used.
However, the `config_lora.yaml` file uses `optim: rmsprop`. Could you tell me which one reflects the actual training configuration?
I don't know if there are other hyperparameters I missed. Could you align the scripts with the correct model training configuration, please?
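For example, if Adam is really the intended optimizer, I'd expect the config to contain something like the snippet below. This is just my guess, assuming the `optim` field follows the usual HuggingFace TrainingArguments option names; `adamw_torch` is an assumption on my part, not something stated in the model card.

```yaml
# config_lora.yaml (my guess, not the actual file):
# if Adam is the intended optimizer, I'd expect something like this
# instead of "optim: rmsprop". "adamw_torch" is the common
# TrainingArguments value; the exact name depends on the trainer used.
optim: adamw_torch
```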