I'm working with the T5-small model for text summarization. I fine-tuned it and also trained an adapter on the same dataset, on the same machine, and with the same configuration.
I expected adapter training to take less time, since it does not update all of the model's parameters. But surprisingly, it took the same time as full fine-tuning. I used the run_summarization script for adapter training.
Can you help me understand why it took the same time? Or am I missing something?
Hi @Darshan2104, I am new to adapters as well and have experienced the same thing. As far as I can remember, a speed-up in training time of around 3x should be possible. I have no idea why this is not the case.
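A quick back-of-the-envelope calculation shows why freezing most parameters does not automatically shrink step time. This is a sketch under assumptions (T5-small dimensions, Houlsby-style bottleneck adapters with reduction factor 16 in every block — not necessarily the exact config used above):

```python
# Rough estimate of how few parameters a bottleneck adapter adds to a
# T5-small-sized model, and why that alone does not cut step time.
# All numbers below are assumptions for illustration.

d_model = 512           # T5-small hidden size
num_layers = 12         # 6 encoder + 6 decoder blocks
adapters_per_layer = 2  # e.g. one after self-attention, one after the FFN
bottleneck = d_model // 16  # reduction factor 16 -> bottleneck dim 32

# Each adapter: down-projection + up-projection (weights + biases)
adapter_params = adapters_per_layer * num_layers * (
    (d_model * bottleneck + bottleneck)   # down: 512x32 + 32
    + (bottleneck * d_model + d_model)    # up:   32x512 + 512
)

t5_small_params = 60_000_000  # ~60M parameters in T5-small
fraction = adapter_params / t5_small_params

print(f"adapter params: {adapter_params:,} (~{fraction:.1%} of the model)")
# The optimizer only updates this small slice of the weights, but every
# forward and backward pass still runs through all the frozen layers,
# so per-step wall-clock time stays close to full fine-tuning.
```

The savings from adapters are mainly in optimizer state, checkpoint size, and gradient updates; the forward pass and most of the backward pass still touch the full frozen model, which usually dominates step time.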
Hello Adapter-hub team,
Looking for a solution!
Thanks,
Darshan Tank