adapter-hub / Hub

ARCHIVED. Please use https://docs.adapterhub.ml/huggingface_hub.html

🔌 A central repository collecting pre-trained adapter modules
https://adapterhub.ml/explore
Apache License 2.0

Time! #42

Darshan2104 closed this issue 7 months ago

Darshan2104 commented 2 years ago

Hello Adapter-hub team,

I'm working with the T5-small model for text summarization. I fine-tuned it and also trained an adapter on the same dataset, on the same machine, and with the same configuration.

I was expecting adapter training to take less time, since it does not update all of the model's parameters. But surprisingly, it took the same time as fine-tuning the full model. I used the run_summarization script for adapter training.
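For context, this is roughly what I expect the adapter setup inside run_summarization to do (a minimal sketch in the adapter-transformers style API; the adapter name here is just illustrative), leaving only the adapter weights trainable:

```python
# Minimal sketch, assuming the adapter-transformers drop-in fork of transformers.
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Add a new adapter and activate it for training; train_adapter() freezes
# the base model weights so only the adapter parameters receive gradients.
model.add_adapter("summarization")   # "summarization" is an illustrative name
model.train_adapter("summarization")

# Sanity check: compare trainable vs. total parameter counts.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable:,} / {total:,} ({100 * trainable / total:.1f}%)")
```

The parameter count confirms that only a small fraction of the weights is being updated, yet the per-step wall-clock time I observe is about the same as full fine-tuning.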

Can you help me understand why it took the same time, or am I missing something?

Looking for a solution!

Thanks,
Darshan Tank

NotNANtoN commented 2 years ago

Hi @Darshan2104, I am new to adapters as well and have experienced the same thing. As far as I can remember, a training speed-up of around 3x should be possible. I have no idea why this is not the case here.