google-research / adapter-bert


In adapter fine-tuning, why aren't the original params fixed? #2

Closed: zihaolucky closed this issue 5 years ago

zihaolucky commented 5 years ago

Hi guys,

I took a glance at the run_classifier.py code and didn't see anything that fixes (freezes) the original transformer parameters, so it looks like a full fine-tuning setting. Why is that? Thanks~

ghost commented 5 years ago

@zihaolucky The restriction to training only the adapter, layer-norm, and head parameters is enforced in the optimizer, using collections:

# Collect only the variables registered in the whitelisted collections;
# all other BERT parameters are excluded from the optimizer's update.
tvars = []
for collection in ["adapters", "layer_norm", "head"]:
  tvars += tf.get_collection(collection)
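
For context, here is a minimal, self-contained sketch (not the repo's actual code; the variable names and the toy loss are illustrative) of how passing the collected variables as var_list to a TF 1.x optimizer keeps every other variable, i.e. the original transformer weights, frozen:

import tensorflow as tf  # TF 1.x graph-mode API, as used by adapter-bert

# Toy stand-ins: one "frozen" BERT weight and one adapter weight that is
# registered in the "adapters" collection (illustrative names only).
bert_weight = tf.get_variable("bert_weight", shape=[], initializer=tf.ones_initializer())
adapter_weight = tf.get_variable("adapter_weight", shape=[], initializer=tf.ones_initializer())
tf.add_to_collection("adapters", adapter_weight)

loss = tf.square(bert_weight + adapter_weight)

# Gather only the variables from the whitelisted collections.
tvars = []
for collection in ["adapters", "layer_norm", "head"]:
  tvars += tf.get_collection(collection)

# var_list restricts the gradient update to those variables; everything
# else (here, bert_weight) receives no update and stays at its value.
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, var_list=tvars)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  sess.run(train_op)
  print(sess.run([bert_weight, adapter_weight]))  # bert_weight unchanged, adapter_weight updated

Running this updates only adapter_weight, while bert_weight keeps its initial value, which is the same effect the tvars restriction has on the original transformer parameters in run_classifier.py.
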
zihaolucky commented 5 years ago

@neil-houlsby Got it. Thank you!