neu-spiral / SparCL

SparCL: Sparse Continual Learning on the Edge @ NeurIPS 22
https://arxiv.org/abs/2209.09476
MIT License

Calculation of FLOPs #2

Closed h1yc2 closed 4 months ago

h1yc2 commented 6 months ago

Hello, I am a student of continual learning, and I am very interested in your paper SparCL: Sparse Continual Learning on the Edge. You mention that the DGM method reduces FLOPs, but I could not find the corresponding calculation in your public code. Could you explain how FLOPs are calculated in your experiments? I look forward to hearing from you. Sorry to bother you, and have a nice life!

zhanzheng8585 commented 4 months ago

Thank you for your interest! As described in Appendix D.1 (Evaluation Metrics Explanation – Training FLOPs), we have a clear process for calculating training FLOPs. We strictly follow the training-FLOPs calculation methodology used in Google's RigL, as detailed in "Rigging the Lottery: Making All Tickets Winners," which we also cite in our experimental section. I hope this helps.
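For readers looking for a concrete starting point, below is a minimal sketch (not the authors' released code) of the RigL-style accounting referenced above. It assumes a layer at density d costs roughly d times its dense forward FLOPs, and that one training step costs about 3x the forward pass (one forward plus two backward passes), which is the convention used in the RigL paper. The function and layer shapes are illustrative only.

```python
def layer_dense_flops(in_features, out_features):
    """Multiply-accumulate FLOPs for one dense linear layer (2 * m * n)."""
    return 2 * in_features * out_features


def approx_training_flops(layer_shapes, densities, num_steps):
    """Approximate total training FLOPs for a sparse network (RigL-style sketch).

    layer_shapes: list of (in_features, out_features) per layer
    densities:    per-layer density (1 - sparsity), same length as layer_shapes
    num_steps:    number of training iterations
    """
    # Sparse forward-pass FLOPs: dense cost scaled by each layer's density.
    forward = sum(d * layer_dense_flops(m, n)
                  for (m, n), d in zip(layer_shapes, densities))
    # One training step ~= forward + backward w.r.t. activations
    # + backward w.r.t. weights ~= 3 * forward pass.
    return 3 * forward * num_steps


# Example: a two-layer MLP at 90% sparsity (density 0.1) trained for 10k steps.
shapes = [(784, 256), (256, 10)]
print(approx_training_flops(shapes, densities=[0.1, 0.1], num_steps=10_000))
```

For the actual numbers reported in the paper, the per-layer dense FLOPs and sparsity schedule of the specific architecture would be plugged into this kind of accounting, as described in Appendix D.1.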