thuml / SimMTM

About: Code release for "SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling" (NeurIPS 2023 Spotlight), https://arxiv.org/abs/2302.00861

Bug in classification finetuning #2

Closed pranftw closed 4 months ago

pranftw commented 9 months ago

Hey authors,

Thanks for making your code publicly available. I noticed that in the classification finetuning training loop, `.zero_grad()` is not called before `.backward()` for the model and classifier optimizers. I just wanted to confirm whether this is intentional or a bug. I've attached a link to the lines below. Thanks again, and I hope to hear back from you soon!

https://github.com/thuml/SimMTM/blob/b4b676ef092dd1ae928b06f371edeb51f489c8f8/SimMTM_Classification/code/trainer.py#L193C1-L196C36

```python
loss.backward()
model_optimizer.step()
classifier_optimizer.step()
```
dongjiaxiang commented 9 months ago

Thank you for the reminder. This is a bug that was introduced when we reorganized the code, and we will fix it as soon as possible.
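For readers hitting this before the fix lands: the likely correction is to zero both optimizers' gradients at the start of each iteration, since PyTorch accumulates gradients into `.grad` across `.backward()` calls. The sketch below is a hypothetical minimal stand-in (toy model, classifier, and data), not the actual SimMTM `trainer.py` code; only the optimizer names `model_optimizer` and `classifier_optimizer` come from the issue.

```python
import torch

# Toy stand-ins for the finetuning setup (hypothetical, for illustration only).
model = torch.nn.Linear(4, 4)
classifier = torch.nn.Linear(4, 2)
model_optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
classifier_optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1)

x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))

for _ in range(3):
    # The missing calls: without these, .backward() adds the new gradients
    # on top of the previous iteration's, corrupting every update after the first.
    model_optimizer.zero_grad()
    classifier_optimizer.zero_grad()

    loss = torch.nn.functional.cross_entropy(classifier(model(x)), y)
    loss.backward()
    model_optimizer.step()
    classifier_optimizer.step()
```

The accumulation behavior is easy to see in isolation: calling `.backward()` twice on the same parameter without zeroing doubles its `.grad`.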