Closed h1yc2 closed 4 months ago
Hello, I am a student of continual learning and I am very interested in your article "SparCL: Sparse Continual Learning on the Edge". I saw that you mentioned the DGM method can reduce FLOPs, but I did not find the relevant calculation in your public code. May I ask how FLOPs are calculated in your experiments? I look forward to hearing from you. Sorry to bother you, and have a nice life!

Thank you for your interest! As described in Appendix D.1, "Evaluation Metrics Explanation – Training FLOPs", we have a clear process for calculating training FLOPs. We strictly follow the training-FLOPs calculation methodology of Google's RigL, as detailed in "Rigging the Lottery: Making All Tickets Winners", which we also cite in our experimental section. I hope this helps.
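For reference, the RigL-style training-FLOPs accounting can be sketched roughly as below. This is a hedged illustration, not code from the SparCL repo: it assumes the common convention that a dense linear layer of shape (fan_in, fan_out) costs 2 * fan_in * fan_out FLOPs per forward pass, that a training step costs 1 forward + 2 backward passes (3x forward), that a sparse layer with density d costs d times the dense FLOPs, and that on mask-update steps (every delta_t steps) the weight gradient is computed densely, giving RigL's per-step average of (3*f_s*delta_t + 2*f_s + f_d) / (delta_t + 1). The function and variable names here are illustrative, not from the paper's code.

```python
def forward_flops(fan_in: int, fan_out: int, density: float = 1.0) -> float:
    """Approximate FLOPs of one (possibly sparse) linear layer's forward pass.

    Assumes the multiply-accumulate convention: 2 * fan_in * fan_out for a
    dense layer, scaled linearly by the weight density for a sparse layer.
    """
    return 2.0 * fan_in * fan_out * density


def rigl_train_flops_per_step(f_sparse: float, f_dense: float,
                              delta_t: int) -> float:
    """Average training FLOPs per step under RigL-style accounting.

    Normal steps cost 3 * f_sparse (1 forward + 2 backward); every delta_t
    steps a mask update computes a dense weight gradient, costing
    2 * f_sparse + f_dense instead.
    """
    return (3.0 * f_sparse * delta_t + 2.0 * f_sparse + f_dense) / (delta_t + 1)


# Example: a 512x512 layer at 10% density, mask updated every 100 steps.
f_d = forward_flops(512, 512)        # dense forward FLOPs
f_s = forward_flops(512, 512, 0.1)   # sparse forward FLOPs
print(rigl_train_flops_per_step(f_s, f_d, delta_t=100))
```

Summing `forward_flops` over all layers of the network (convolutions analogously, with FLOPs scaled by each layer's density) and multiplying the per-step average by the number of training steps gives the total training-FLOPs estimate.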