Allen0307 / AdapterBias

Code for the Findings of NAACL 2022 (Long Paper): AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks

a new issue #6

Open xuguangyi1999 opened 1 year ago

xuguangyi1999 commented 1 year ago

Why is your AdapterBias added after the second feed-forward layer? Why not add it before the second feed-forward layer?
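For context on the placement being asked about, here is a minimal sketch of the AdapterBias module as described in the paper: a linear layer produces a token-dependent scalar weight, which scales a learned shift vector shared across tokens. The class and variable names (`alpha`, `v`, `hidden_dim`) are illustrative, not taken from the repository's code.

```python
import torch
import torch.nn as nn

class AdapterBias(nn.Module):
    """Sketch of AdapterBias: shift_i = alpha(x_i) * v for each token x_i."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.alpha = nn.Linear(hidden_dim, 1)           # token-dependent scalar weight
        self.v = nn.Parameter(torch.zeros(hidden_dim))  # shift vector shared by all tokens

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim) -> shift of the same shape
        # alpha(x): (batch, seq_len, 1) broadcasts against v: (hidden_dim,)
        return self.alpha(x) * self.v
```

One plausible consideration (my speculation, not an answer from the authors): after the second feed-forward layer the representation is back in the model's hidden dimension, so `v` stays small; before the second feed-forward layer the shift would live in the larger intermediate dimension (typically 4x the hidden size), costing more parameters.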