Allen0307 / AdapterBias

Code for the Findings of NAACL 2022 (Long Paper): AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks

L0 regularization in AdapterBias #7

Open hhh123-1 opened 1 year ago

hhh123-1 commented 1 year ago

Hello, after reading the paper I still don't understand the L0 regularization section of AdapterBias, and I can't find the corresponding code. Could you please send me the code for L0 regularization in AdapterBias?

xuguangyi1999 commented 1 year ago

I'd like to know this too.

Allen0307 commented 1 year ago

Hi, our implementation of L0 regularization is the same as in diff-pruning (https://arxiv.org/abs/2012.07463). The code can be found here: https://github.com/dguo98/DiffPruning/blob/main/examples/run_glue_diffpruning.py. However, applying L0 regularization to AdapterBias did not perform well, so we recommend fine-tuning AdapterBias directly.
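For readers who want a starting point, below is a minimal PyTorch sketch of the hard-concrete L0 relaxation (Louizos et al., 2018) that diff-pruning builds on. The class name, hyperparameters, and usage are illustrative assumptions for this thread, not code from the AdapterBias or DiffPruning repositories.

```python
import math
import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    """Sketch of an L0 gate via the hard-concrete distribution
    (Louizos et al., 2018), the relaxation used by diff-pruning.
    Names and defaults are illustrative, not from either repo."""

    def __init__(self, num_gates, temperature=2.0 / 3.0, stretch=(-0.1, 1.1)):
        super().__init__()
        # One learnable log-alpha per gated element.
        self.log_alpha = nn.Parameter(torch.zeros(num_gates))
        self.temperature = temperature
        # Stretch interval (l, r) lets gates reach exactly 0 or 1 after clamping.
        self.l, self.r = stretch

    def forward(self):
        if self.training:
            # Reparameterized sample from the concrete distribution.
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1.0 - 1e-6)
            s = torch.sigmoid(
                (torch.log(u) - torch.log(1.0 - u) + self.log_alpha) / self.temperature
            )
        else:
            # Deterministic gate at evaluation time.
            s = torch.sigmoid(self.log_alpha)
        s = s * (self.r - self.l) + self.l       # stretch to (l, r)
        return s.clamp(0.0, 1.0)                  # hard gate z in [0, 1]

    def l0_penalty(self):
        # Expected number of non-zero gates: sum over P(z != 0).
        return torch.sigmoid(
            self.log_alpha - self.temperature * math.log(-self.l / self.r)
        ).sum()

if __name__ == "__main__":
    gate = HardConcreteGate(num_gates=768)
    shift = torch.randn(4, 128, 768)              # stand-in for the adapter's representation shift
    gated = shift * gate()                         # element-wise gating toward sparsity
    penalty = 1e-4 * gate.l0_penalty()             # the 1e-4 weight is an arbitrary example
    print(gated.shape, penalty.item())
```

In training, `penalty` would be added to the task loss so that gates are pushed toward exactly zero, sparsifying the shift vector; at evaluation the gate is deterministic.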