eric-ai-lab / PEViT

Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers"
MIT License

Question about LoRA alpha #5

Open vishaal27 opened 1 year ago

vishaal27 commented 1 year ago

Hi, thanks for your great work. I noticed that in your scripts, you hard-coded the LoRA alpha to 128 and the rank r to 4 (giving a scaling factor of alpha/r = 32): https://github.com/eric-ai-lab/PEViT/blob/be6fb43ff54adeeffe720c663dd238976070558e/vision_benchmark/evaluation/lora_model.py#L455-L463 Was there a principled justification for these choices? I'm wondering whether you tuned these values, and whether you can suggest good values to use.
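For context, the scaling factor mentioned above comes from how LoRA applies its low-rank update: the frozen weight is perturbed by `(alpha / r) * B @ A`. A minimal sketch of such a layer (illustrative names, not the exact PEViT implementation) under the hyperparameters from the question:

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Sketch of a LoRA-adapted linear layer (names are hypothetical)."""

    def __init__(self, in_features, out_features, r=4, lora_alpha=128):
        super().__init__()
        # Frozen pretrained weight (random here for illustration).
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # Trainable low-rank factors: delta_W = B @ A has rank <= r.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init => no change at start
        # scaling = alpha / r; with alpha=128 and r=4 this is 32.
        self.scaling = lora_alpha / r

    def forward(self, x):
        base = x @ self.weight.T
        lora = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * lora
```

Since `lora_B` is zero-initialized, the adapted layer starts out identical to the frozen one; `alpha / r` then controls how strongly the learned update is weighted relative to the pretrained weight.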

jkooy commented 12 months ago

Hi, thanks for your interest! The setting is inherited from LoRA's official development code: https://github.com/microsoft/LoRA/tree/snapshot-9-15-2021 https://github.com/microsoft/LoRA/blob/snapshot-9-15-2021/src/model.py