radhacr closed this issue 8 months ago
Hey, regarding this
lora_modules_to_save: embed_tokens, lm_head
error: I might've replied to you or someone else on Discord. Could you try a list, as shown in the README?
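For anyone hitting the same error, the list form suggested above would look like this in the config (a sketch, using the same module names mentioned in this thread):

```yaml
lora_modules_to_save:
  - embed_tokens
  - lm_head
```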
Thanks this worked.
Marking this issue as resolved
Please check that this issue hasn't been reported before.
Expected Behavior
I'm testing out the falcon-7B finetuning example with the config file
examples/falcon/config-7b-qlora.yml
as is.
Current behaviour
As suggested in the README, I ran the command line
accelerate launch -m axolotl.cli.train examples/falcon/config-7b-qlora.yml
It first fails with the following error:
After unsetting early_stopping_patience as
early_stopping_patience:
this is the error.
Finally, after setting
lora_modules_to_save
as
lora_modules_to_save: embed_tokens, lm_head
this is the error:
Steps to reproduce
I ran the command
accelerate launch -m axolotl.cli.train examples/falcon/config-7b-qlora.yml
with the changes to the yaml as described above
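A minimal sketch of the relevant fragment after applying the fixes discussed in this thread (the rest of examples/falcon/config-7b-qlora.yml left as shipped):

```yaml
# An empty value parses as YAML null, effectively unsetting the option
early_stopping_patience:

# Must be a YAML list, not a comma-separated scalar
lora_modules_to_save:
  - embed_tokens
  - lm_head
```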
Config yaml
This is the final config-7b-qlora.yml, which results in the last error.
Possible solution
No response
Which Operating Systems are you using?
Python Version
3.10.12
axolotl branch-commit
main v0.3.0
Acknowledgements