Closed · XiaoJia849 closed this issue 2 months ago
Hi, thanks for your interest in MOMENT, and this is certainly not a silly question!
During fine-tuning, we train MOMENT on an unsupervised reconstruction task: we fine-tune the pre-training (reconstruction) head on in-domain time series, without any labels. The assumption is that a model that reconstructs normal time series well will also expose anomalies, i.e. sub-sequences that the model cannot reconstruct accurately.
We only use the labels for evaluation.
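To make the split concrete, here is a minimal, self-contained sketch of the idea (not MOMENT's actual code): the training loss is plain reconstruction MSE and never touches the labels, while the labels appear only when scoring predictions afterwards. The zero-valued "reconstruction" is a hypothetical stand-in for a model that has learned nominal behaviour.

```python
import random

random.seed(0)
# Synthetic series: small Gaussian noise plus one injected anomalous window.
series = [random.gauss(0.0, 1.0) for _ in range(256)]
for i in range(100, 110):
    series[i] += 5.0

# Hypothetical stand-in for the model output: a model fine-tuned on normal
# data should reconstruct something close to nominal behaviour (here, zeros).
recon = [0.0] * len(series)

# Fine-tuning objective: reconstruction MSE. Note that no labels appear here.
loss = sum((x - r) ** 2 for x, r in zip(series, recon)) / len(series)

# Evaluation only: per-timestep squared error is the anomaly score, which is
# compared against ground-truth labels the training loss never saw.
scores = [(x - r) ** 2 for x, r in zip(series, recon)]
labels = [1 if 100 <= i < 110 else 0 for i in range(len(series))]
threshold = sorted(scores)[int(0.95 * len(scores))]
pred = [1 if s > threshold else 0 for s in scores]
```

Anomalous timesteps get large reconstruction errors and cross the threshold; the labels only enter when you compute metrics such as precision or recall over `pred`.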
Let us know if you have any more questions.
Best, Mononito
Maybe this is a stupid question, but I really want to know: why don't the batch_x labels participate in the loss calculation? Thanks!
Here is the code in /moment/tasks/anomaly_detection_finetune.py