ldzhangyx / instruct-MusicGen

The official implementation of our paper "Instruct-MusicGen: Unlocking Text-to-Music Editing for Music Language Models via Instruction Tuning".
Apache License 2.0

LoRA finetuning deactivated? #8

Closed EladDvash closed 3 months ago

EladDvash commented 3 months ago

When looking at the code, I noticed that the LoRA config is there, but the PEFT model is never initialized: line 368 of model.py is commented out:

self.text_lora_config = peft.LoraConfig(
    target_modules=r".*\.cross_attention\.(q_proj|v_proj)",
    r=32,
    lora_alpha=64,
)
# self.peft_model.lm.transformer = peft.get_peft_model(self.peft_model.lm.transformer, self.text_lora_config)
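For context, when `target_modules` is given as a string, PEFT treats it as a regex and matches it against each submodule's qualified name to decide where to inject LoRA adapters. A minimal sketch of what the pattern above selects, using hypothetical module names in the style of MusicGen's transformer layers (the names are illustrative, not taken from the actual model):

```python
import re

# The target_modules pattern from the LoRA config above.
pattern = re.compile(r".*\.cross_attention\.(q_proj|v_proj)")

# Hypothetical module names, loosely modeled on MusicGen's transformer layers.
names = [
    "layers.0.cross_attention.q_proj",
    "layers.0.cross_attention.v_proj",
    "layers.0.cross_attention.k_proj",  # not targeted
    "layers.0.self_attention.q_proj",   # not targeted
]

# PEFT uses a full match against the module name for string patterns.
matched = [n for n in names if pattern.fullmatch(n)]
print(matched)  # only the cross-attention q/v projections
```

So only the cross-attention query and value projections receive LoRA adapters, which is why no LoRA layers show up at all while the `get_peft_model` call is commented out.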

When I print the trainable layers, no LoRA layers appear. Is this by design, or a mistake?

ldzhangyx commented 3 months ago

Thanks for raising this issue. The line was commented out for an ablation study and should indeed be uncommented. I have fixed this problem.