- [x] Serialize/deserialize the optimizer config in the lora_linear operator
- [x] Extend the current alignment test to check alignment after each optimizer step
- [ ] Check alignment with TP degree > 1
- [ ] Check alignment of the Adam and AdamW optimizers

Minor:
- [x] Add/debug code to set the peft_optimizer_update flag properly (by default, set it to 1 on every iteration, or on every n-th iteration if the user specifies gradient_accumulation_steps > 1)
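The flag logic described above can be sketched as follows. This is an illustrative sketch only: the function name `should_update_optimizer` and its signature are hypothetical, not the actual implementation.

```python
def should_update_optimizer(iteration: int,
                            gradient_accumulation_steps: int = 1) -> bool:
    """Return True (i.e., set peft_optimizer_update to 1) when the
    optimizer step should run on this iteration.

    With no gradient accumulation (steps = 1), the flag is set on every
    iteration; with gradient_accumulation_steps = n, it is set only on
    every n-th iteration, after n gradients have been accumulated.
    """
    # iteration is assumed to be 0-based, so the n-th iteration is
    # reached when (iteration + 1) is divisible by n.
    return (iteration + 1) % gradient_accumulation_steps == 0
```

For example, with `gradient_accumulation_steps=2`, iterations 0 through 3 yield False, True, False, True, so the optimizer updates every second iteration.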