GeorgePearse opened 1 year ago
@GeorgePearse Thank you for your attention. We have already conducted the corresponding experiments.
Thank you for considering this direction. Is there a tentative timeline for the release of these experiments?
Hey @hhaAndroid super excited to hear anything about this.
Hi, I replaced the backbone of Faster R-CNN with the CLIP image encoder and used LoRA to fine-tune it, with the detection head randomly initialized. It turns out that mAP goes to 0, while full fine-tuning without LoRA works normally. I'm wondering if you've encountered the same issue.
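One thing worth checking when mAP collapses to 0 in this setup: many LoRA wrappers freeze every non-LoRA parameter, which silently freezes a randomly initialized detection head too, so it can never learn. A quick sanity check in plain PyTorch (a minimal sketch; the toy model and names are illustrative, not the actual Faster R-CNN setup):

```python
import torch
from torch import nn

def trainable_param_report(model: nn.Module) -> dict:
    """Group parameter names by whether they still receive gradients.

    Useful after wrapping a detector with a LoRA library: if the
    randomly initialized detection head ends up in the frozen group,
    it can never learn and mAP will stay at 0.
    """
    report = {"trainable": [], "frozen": []}
    for name, param in model.named_parameters():
        key = "trainable" if param.requires_grad else "frozen"
        report[key].append(name)
    return report

# Toy stand-in for a detector: a frozen "backbone" and a fresh "head".
model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 4))
for p in model[0].parameters():  # freeze the backbone, as LoRA wrappers do
    p.requires_grad = False

report = trainable_param_report(model)
print(report["frozen"])     # backbone params
print(report["trainable"])  # head params -- these must stay trainable
```

If the head's parameters show up under `frozen`, the fix is to re-enable `requires_grad` on them (or mark them as extra trainable modules in the LoRA library's config) after wrapping.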
Hey @seanzhuh, any chance you could send me the code you used or some tips for trying LoRA? Then I'll be able to help you with debugging.
Is it as simple as finding the right package to add in the additional layers?
Calling:
model = LORA(model)
which would add the appropriate layers and freeze the others, in the right place, and then training as normal?
I've been trying both the PEFT and MinLora packages
I also saw that LoRA is implemented in MMPretrain.
Any updates on this? Or were the experiments not successful?
I'm also interested to know more about the results 👀
MMDetection includes both Swin and DETR; if I understand the concept correctly, both could be fine-tuned with LoRA in a fast and memory-efficient manner.
Support for training object detectors with LoRA is currently extremely limited anywhere, and MMDetection could lead in this area.