bigscience-workshop / multilingual-modeling

BLOOM+1: Adapting BLOOM model to support a new unseen language
https://arxiv.org/abs/2212.09535
Apache License 2.0

Composable SFT #28

Open haileyschoelkopf opened 2 years ago

haileyschoelkopf commented 2 years ago

Paper: https://arxiv.org/pdf/2110.07560.pdf
Code: https://github.com/cambridgeltl/composable-sft

TODOs:

If we want to train both adapters and Composable SFT at once, this will require some extra code. Probably not too bad, but it would need extra testing to make sure all the correct parameters get frozen.
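For the record, the freezing logic could look something like this rough sketch (the name-matching heuristics for adapter and SFT parameters are assumptions on my part, not code from this PR):

```python
import torch.nn as nn

def freeze_for_adapters_plus_sft(model: nn.Module, sft_trainable_names: set):
    """Rough sketch: keep only adapter weights and the parameters selected by the
    SFT mask trainable, and freeze everything else in the base model."""
    for name, param in model.named_parameters():
        is_adapter_param = "adapters" in name or "invertible" in name  # adapter-transformers-style names (assumed)
        is_sft_param = name in sft_trainable_names                     # names chosen by the sparse mask (hypothetical input)
        param.requires_grad = is_adapter_param or is_sft_param
```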

haileyschoelkopf commented 2 years ago

Issue #23 is addressed by this.

EDITED: [image: loss curves] For reference, here are the loss curves for sparse fine-tuning compared to the adapter test run. The loss curves look great. I am curious to see whether the less steep eval loss will translate into lower downstream performance :)

haileyschoelkopf commented 2 years ago
06/27/2022 21:55:58 - INFO - __main__ - adapter elements: 3697664
06/27/2022 21:55:58 - INFO - __main__ - K value for SFT is 3670016.0

My calculation of the number of tunable params is still slightly off; I'm missing something minor. (In the script, I estimate the number of parameters the model would get from pfeiffer+inv adapters with the given adapter_reduction_factor.)

Yong, did you run the calculation to get the number of tunable adapter parameters, or did you just get that number by adding an adapter to the model and counting?
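For reference, what I mean by the second option: add the adapter, activate it for training, and sum the trainable parameters, roughly like this (imports and call names follow the adapter-transformers fork as I remember them, and the checkpoint is just an example):

```python
from transformers import AutoModelForCausalLM
from transformers.adapters import PfeifferInvConfig  # pfeiffer+inv adapter config

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
model.add_adapter("lang_adapter", config=PfeifferInvConfig(reduction_factor=16))
model.train_adapter("lang_adapter")  # freezes the base model, leaves only adapter params trainable

adapter_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable adapter params: {adapter_params}")
```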

haileyschoelkopf commented 2 years ago

@yongzx This is ready to merge; the only thing that needs changing is the calculation of the number of parameters to fine-tune.

yongzx commented 2 years ago

Yong, did you run the calculation to get the number of tunable adapter parameters, or did you just get that number by adding an adapter to the model and counting?

the only thing that needs changing is the calculation of the number of parameters to fine-tune.

I can help do this, no worries! It's just a running sum of trainable parameters. I will review the code over the weekend.

yongzx commented 2 years ago

I just did a quick read of the commit. It seems like for SFT, we don't need to modify anything in adapter-transformers?

haileyschoelkopf commented 2 years ago

Yep! Just

git clone https://github.com/cambridgeltl/composable-sft.git
cd composable-sft
pip install -e .

to install their code. Thanks!
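(Side note for anyone reading along: the reason nothing in adapter-transformers needs to change is that a sparse fine-tune is just a sparse set of parameter differences added on top of the frozen base weights. A minimal illustration of that idea, not the composable-sft API itself:)

```python
import torch

def apply_sparse_diff(model: torch.nn.Module, diffs: dict):
    """Add a dict of (mostly zero) parameter deltas, keyed by parameter name,
    onto the base model in place."""
    params = dict(model.named_parameters())
    with torch.no_grad():
        for name, delta in diffs.items():
            params[name].add_(delta.to_dense() if delta.is_sparse else delta)
```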

haileyschoelkopf commented 2 years ago

Also

I can help do this, no worries! It's just a running sum of trainable parameters. I will review the code over the weekend.

what I meant by this was to set the number of parameters this method changes (it is fully configurable) so that the total is equivalent to the number of trainable parameters in pfeiffer+inv adapters, not just to count the trainable params this method produces.
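In other words, using the numbers from the log above:

```python
# Budget matching: the pfeiffer+inv adapter adds 3,697,664 trainable params,
# but the SFT budget K was set to 3,670,016. The fix is to make them equal,
# e.g. by reusing the counted adapter value directly:
adapter_params = 3_697_664     # "adapter elements" from the log
sft_budget_k = adapter_params  # K for sparse fine-tuning, so both methods tune the same number of params
```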

yongzx commented 2 years ago

I will prioritize evaluation test suites over this for now, but I hope to finish reviewing this before our meeting this Friday.

haileyschoelkopf commented 2 years ago

No problem!

For reference, to install the dependency, do this instead:

git clone https://github.com/haileyschoelkopf/composable-sft.git
cd composable-sft
pip install -e .

I'm hoping to add Random and FISH masking strategies to this fork.

yongzx commented 2 years ago

Referring to the MLM training scripts, the numbers of training steps for full-model and sparse fine-tuning seem to be equal. Since we are comparing sparse fine-tuning to other adapter methods, we need to set both to 25K steps.
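As a sketch of the intended schedule (variable names here are illustrative, not the actual script flags):

```python
# Match the adapter baselines: both phases of composable SFT run for 25K steps.
full_model_ft_steps = 25_000   # dense phase that selects which parameters to tune
sparse_ft_steps = 25_000       # sparse phase that fine-tunes only the selected parameters
```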

yongzx commented 2 years ago

558f674 now supports composable SFT. Hailey, do you want to test it out?

haileyschoelkopf commented 2 years ago

Yes, let me try running this with those parameters!