eric-ai-lab / PEViT

Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers"

Questions about KAdaptation implementation #6

Open vishaal27 opened 1 year ago

vishaal27 commented 1 year ago

Hi, thanks for the great work and for releasing the code to reproduce it.

I have a few questions regarding the Kronecker adaptation forward pass through the adapter modules:

(1) The scaling factor you use for KAdaptation is 1/5 of the scaling used in standard LoRA: https://github.com/eric-ai-lab/PEViT/blob/be6fb43ff54adeeffe720c663dd238976070558e/vision_benchmark/evaluation/model.py#L564 Is there a justification for this, or is it simply an empirical magic number?
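For context, here is a minimal runnable sketch of the scaling I am asking about, assuming the standard LoRA convention of `alpha / r`; all names and shapes (`alpha`, `r`, `d`, `delta_W`) are my illustrative assumptions, not taken from the repo:

```python
import torch

# Illustrative shapes and hyper-parameters (assumptions, not from the repo)
d, r, alpha = 64, 4, 8
W = torch.randn(d, d)                            # frozen pretrained weight
delta_W = torch.randn(d, r) @ torch.randn(r, d)  # low-rank update B @ A

lora_scaling = alpha / r                 # standard LoRA: W + (alpha / r) * delta_W
kadapt_scaling = lora_scaling * (1 / 5)  # the extra 1/5 factor in question
W_adapted = W + kadapt_scaling * delta_W
```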

(2) While forwarding through your adapter for the value matrix, you appear to reuse the query weight matrix (the matrix A as defined in the paper, as I understand it). Is this a typo/bug? https://github.com/eric-ai-lab/PEViT/blob/be6fb43ff54adeeffe720c663dd238976070558e/vision_benchmark/evaluation/model.py#L571-L580 Shouldn't line 580 be `H = kronecker_product_einsum_batched(phm_rule2, Wv).sum(0)` instead?
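To make the question concrete, here is a minimal sketch of the pattern I mean. The body of `kronecker_product_einsum_batched` below is my reconstruction of a standard batched Kronecker product via `einsum` (the function name matches the repo, but the implementation and all shapes are assumptions):

```python
import torch

def kronecker_product_einsum_batched(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    # Batched Kronecker product: A (b, i, j) x B (b, k, l) -> (b, i*k, j*l)
    res = torch.einsum('bij,bkl->bikjl', A, B)
    return res.reshape(A.size(0), A.size(1) * B.size(1), A.size(2) * B.size(2))

b, i, j, k, l = 4, 2, 2, 8, 8          # illustrative shapes only
phm_rule2 = torch.randn(b, i, j)       # shared "slow" Kronecker factors
Wq = torch.randn(b, k, l)              # query-side factors
Wv = torch.randn(b, k, l)              # value-side factors

# As written in the linked code, the value branch reuses Wq:
H_value_as_written = kronecker_product_einsum_batched(phm_rule2, Wq).sum(0)
# The fix I am proposing would use Wv instead:
H_value_proposed = kronecker_product_einsum_batched(phm_rule2, Wv).sum(0)
```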

jkooy commented 1 year ago

Hi, many thanks for your interest! The scaling factor is a hyper-parameter; you can adjust it manually, but in my experience it does not affect performance much. For the value matrix, we actually share the same decomposition, which is why it is reused.
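If I read the reply correctly, the reuse is intentional: the query and value updates are built from one shared Kronecker decomposition rather than two separate ones. A hedged sketch of that reading, with all names and shapes assumed for illustration:

```python
import torch

# Shared decomposition: a single pair of factors yields one update H,
# which is applied on both the query and the value paths.
A = torch.randn(2, 2)   # shared "slow" factor (assumed shape)
B = torch.randn(8, 8)   # shared "fast" factor (assumed shape)
H = torch.kron(A, B)    # (16, 16) shared update

x = torch.randn(5, 16)
q_out = x @ H.T         # update applied on the query path
v_out = x @ H.T         # the same update reused on the value path
```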