adymaharana / d2pruning


Score function used in d2pruning #1

Open hamedbehzadi opened 5 months ago

hamedbehzadi commented 5 months ago

Dear Maharana

I hope this message finds you well, and thank you for sharing the details of your work. I have a question about the paper. In Sec. 2.2 (Example Difficulty), you give a good overview of the different score functions used in other papers. From the shared code, I also noticed that you collect the training dynamics introduced in those papers. However, I was not able to figure out which score function you actually use. Is it a combination of different training dynamics, such as logits, probabilities, etc.? I would appreciate it if you could help me out with this.

Thanks for your attention. Hamed

adymaharana commented 5 months ago

Hi Hamed,

Thank you for your interest in our work! Our graph-based framework is not tied to any single score function and can work with any of them. For our experiments, however, we followed the score functions recommended by Zheng et al. 2022; see under 'Implementation' in Sec. 4. We use the forgetting score for all image datasets except ImageNet, where we use the AUM score. For the NLP datasets, we use the variance score, and for DataComp we use the CLIP score. Let me know if you have any additional or follow-up questions.
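
For reference, here is a minimal sketch of how such per-example scores are commonly derived from collected training dynamics (forgetting score: correct-to-incorrect flips across epochs, Toneva et al. 2019; AUM: average margin of the target logit, Pleiss et al. 2020; variance: spread of the true-class probability across epochs). This is not the repository's actual code; the function names and array shapes are assumptions for illustration only.

```python
import numpy as np

def forgetting_score(correct):
    """correct: (epochs, n) boolean array, True if the example was classified
    correctly at that epoch. Returns the number of correct->incorrect flips."""
    correct = correct.astype(int)
    return ((correct[:-1] - correct[1:]) == 1).sum(axis=0)

def aum_score(logits, labels):
    """logits: (epochs, n, classes) floats, labels: (n,) ints.
    Margin of the target logit over the largest other logit, averaged over epochs."""
    idx = labels[None, :, None]                      # broadcastable index
    target = np.take_along_axis(logits, idx, axis=2).squeeze(-1)
    masked = logits.copy()
    np.put_along_axis(masked, idx, -np.inf, axis=2)  # exclude the target logit
    return (target - masked.max(axis=2)).mean(axis=0)

def variance_score(probs, labels):
    """probs: (epochs, n, classes) softmax outputs. Standard deviation of the
    true-class probability across epochs (the 'variability' of data maps)."""
    true_prob = np.take_along_axis(probs, labels[None, :, None], axis=2).squeeze(-1)
    return true_prob.std(axis=0)

# toy usage with random training dynamics
epochs, n, classes = 5, 3, 4
rng = np.random.default_rng(0)
logits = rng.normal(size=(epochs, n, classes))
labels = rng.integers(0, classes, size=n)
probs = np.exp(logits) / np.exp(logits).sum(axis=2, keepdims=True)
print(forgetting_score(logits.argmax(axis=2) == labels))
print(aum_score(logits, labels))
print(variance_score(probs, labels))
```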