bjzhb666 / GS-LoRA

Continual Forgetting for Pre-trained Vision Models (CVPR 2024)
https://arxiv.org/abs/2403.11530
MIT License

[ ReadMe: Object Detection Experiments ] #4

Open IemProg opened 3 months ago

IemProg commented 3 months ago

Hi @bjzhb666 ,

Thanks a lot for releasing the code.

I have been closely studying your implementation and have a couple of questions that I hope you can help clarify:

  1. Difference between train_own_forget_cl.py and train_own_forget.py: I noticed that there are two seemingly similar scripts in the repository: train_own_forget_cl.py and train_own_forget.py. Could you please elaborate on the specific differences between these two files? It would be helpful to understand their distinct purposes and when each script should be used.

  2. Reproducing Object Recognition Results with DETR: I am particularly interested in reproducing the object recognition results you achieved using DETR. Could you provide more detailed instructions or a guide on how to set up and execute the code for this task?

  3. Loss function: Why do you freeze the loss function parameters?

Thank you once again for your impressive work and for any assistance you can provide.

bjzhb666 commented 3 months ago

Thanks for your attention to our work.

  1. They are essentially the same. train_own_forget.py is for single-step forgetting and train_own_forget_cl.py is for continual forgetting, but you can also use train_own_forget_cl.py for single-step forgetting. We use train_own_forget.py to run additional ablation studies in the single-step setting. If you are not interested in the ablation details, just use train_own_forget_cl.py.
  2. We have not released the code for object detection yet (it still needs cleaning up). The procedure is similar: we start from the official checkpoint provided by Deformable DETR, modify the network structure with the LoRA module, and add our GS (group sparse) loss, knowledge retention loss, and selective forgetting loss.
  3. The "loss function parameters" are actually the FFN module (the classifier); the name comes from our code base, Face Transformer, and is somewhat confusing. Please refer to our paper (e.g., Sec. 6.1) for details on why we freeze the FFN layers.
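
For readers trying to reproduce this before the detection code is released: the three losses mentioned in answer 2 might be combined roughly as below. This is an illustrative sketch, not the released implementation; the bounded-forgetting form (`relu(bnd - ce)`), the weighting coefficients, and all function names are assumptions.

```python
import torch
import torch.nn.functional as F

def group_sparse_loss(lora_weights):
    # L2,1-style group sparsity: sum of per-group L2 norms, encouraging
    # whole LoRA groups to shrink to zero (the "GS" in GS-LoRA).
    return sum(w.norm(p=2) for w in lora_weights)

def total_loss(logits_forget, labels_forget,
               logits_retain, labels_retain,
               lora_weights, alpha=0.1, beta=0.01, bnd=5.0):
    # Selective forgetting: push the loss on forgotten classes *up*,
    # capped at a bound so optimization stays stable (assumed form).
    ce_forget = F.cross_entropy(logits_forget, labels_forget)
    l_forget = F.relu(bnd - ce_forget)
    # Knowledge retention: ordinary cross-entropy on the remaining classes.
    l_retain = F.cross_entropy(logits_retain, labels_retain)
    # Group sparsity over the LoRA parameters.
    l_sparse = group_sparse_loss(lora_weights)
    return l_retain + alpha * l_forget + beta * l_sparse
```

The coefficients `alpha` and `beta` are placeholders; consult the paper for the actual loss weights and the precise form of each term.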
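
Freezing the FFN/classifier while leaving only the LoRA modules trainable is a standard PyTorch pattern; the sketch below assumes LoRA parameter names contain `lora_`, a convention borrowed from common LoRA implementations rather than from this repository.

```python
import torch.nn as nn

def mark_only_lora_trainable(model: nn.Module) -> None:
    # Freeze every parameter (including the FFN classifier), then re-enable
    # gradients only for parameters whose names mark them as LoRA modules.
    for name, param in model.named_parameters():
        param.requires_grad = "lora_" in name
```

With this in place, the optimizer only ever updates the LoRA parameters, so the frozen classifier retains its pre-trained weights throughout forgetting.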