mims-harvard / TFC-pretraining

Self-supervised contrastive learning for time series via time-frequency consistency
https://zitniklab.hms.harvard.edu/projects/TF-C/
MIT License

Time-Frequency Consistency Loss is not utilized #21

Open xiaoyuan7 opened 1 year ago

xiaoyuan7 commented 1 year ago

I noticed that the Time-Frequency Consistency Loss is not actually used in your code. Could you please confirm whether this is intentional? If it is, could you explain the reasoning behind it and its potential impact on the model's performance?

1057699668 commented 1 year ago

Hello, I noticed this too. I therefore modified the loss function to include the time-frequency consistency loss, and the final experimental results differed significantly from those in the paper. I hope the author can clear up this doubt for us.
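For context, the consistency term the paper describes (pulling the time-domain and frequency-domain embeddings of the same sample together) can be sketched roughly as follows. This is a minimal numpy sketch under my own reading of the paper, not the authors' implementation; the names `z_t`, `z_f`, and the hinge/margin form are assumptions:

```python
import numpy as np

def tfc_consistency_loss(z_t, z_f, margin=1.0):
    """Sketch of a time-frequency consistency term (assumed form, not the
    authors' code): penalize time-domain embeddings z_t and frequency-domain
    embeddings z_f of the same sample that are farther apart than a margin.
    Shapes: (batch, dim)."""
    # Euclidean distance between paired time/frequency embeddings
    d = np.linalg.norm(z_t - z_f, axis=1)
    # Hinge-style penalty: zero when the pair is already within the margin
    return np.maximum(d - margin, 0.0).mean()

# Identical embeddings incur zero loss; distant ones are penalized.
z = np.ones((4, 8))
print(tfc_consistency_loss(z, z))          # 0.0
print(tfc_consistency_loss(z, -z) > 0.0)   # True
```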

yuyunannan commented 1 year ago

Were you able to get good results on the other three experiments? How did you set the parameters?

1057699668 commented 1 year ago

> Were you able to get good results on the other three experiments? How did you set the parameters?

Sorry, I can't reproduce the results of the other three experiments either. I can only reproduce the one-to-one SleepEEG → Epilepsy results with the original model parameter settings.

1057699668 commented 1 year ago

> Were you able to get good results on the other three experiments? How did you set the parameters?
>
> Sorry, I can't reproduce the results of the other three experiments either. I can only reproduce the one-to-one SleepEEG → Epilepsy results with the original model parameter settings.

I also tried pre-training and fine-tuning on other datasets, but the performance was poor.

yuyunannan commented 1 year ago

> I also tried pre-training and fine-tuning on other datasets, but the performance was poor.

I have made many attempts; only the SleepEEG experiment comes close to the results in the paper. The other results are poor.

1057699668 commented 1 year ago

> I also tried pre-training and fine-tuning on other datasets, but the performance was poor.
>
> I have made many attempts; only the SleepEEG experiment comes close to the results in the paper. The other results are poor.

Perhaps only the author can answer these questions for us.

zzj2404 commented 1 year ago

> I also tried pre-training and fine-tuning on other datasets, but the performance was poor.
>
> I have made many attempts; only the SleepEEG experiment comes close to the results in the paper. The other results are poor.

Have you solved the subset problem?

1057699668 commented 1 year ago

Sorry, I haven't solved the subset problem yet. Maybe the author only gave the correct settings for the SleepEEG → Epilepsy experiment.


JohnLone00 commented 1 year ago

> Sorry, I haven't solved the subset problem yet. Maybe the author only gave the correct settings for the SleepEEG → Epilepsy experiment.

The author's code seems to have a problem: the backbone network uses torch's TransformerEncoderLayer but does not set batch_first to true, even though, given the author's data format, batch_size should be the first dimension. It also does not seem reasonable to use a TransformerEncoder on single-channel time-series input.
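To illustrate the point about the layout convention: torch's `TransformerEncoderLayer` defaults to `batch_first=False`, i.e. it interprets its input as `(seq_len, batch, d_model)`, so data shaped `(batch, seq_len, d_model)` silently has its batch axis treated as the sequence axis. A minimal sketch of the fix (the dimensions here are illustrative, not the repo's actual configuration):

```python
import torch
import torch.nn as nn

# TransformerEncoderLayer defaults to batch_first=False, expecting input
# shaped (seq_len, batch, d_model). If the data loader yields
# (batch, seq_len, d_model), pass batch_first=True so the batch axis is
# not silently treated as the sequence axis.
layer = nn.TransformerEncoderLayer(d_model=16, nhead=2, batch_first=True)
x = torch.randn(8, 32, 16)   # (batch=8, seq_len=32, d_model=16)
out = layer(x)
print(out.shape)             # torch.Size([8, 32, 16])
```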

maxxu05 commented 1 year ago

Yes, this has also been mentioned in issue #19. I agree that the single-channel time-series input doesn't make sense, especially since the transformer is currently coded such that the "time" axis of the self-attention mechanism is actually the singular channel. As a result, the sequence the self-attention mechanism attends over has length 1.
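The point about a length-1 sequence can be verified directly: softmax over a single attention score is always 1, so self-attention over one position reduces to the value projection of that position and learns nothing from attention itself. A toy single-head sketch in numpy (identity projections are an assumption chosen for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy single-head self-attention over a sequence of length 1.
seq_len, dim = 1, 4
x = np.random.randn(seq_len, dim)
Wq = Wk = Wv = np.eye(dim)              # identity projections for illustration
q, k, v = x @ Wq, x @ Wk, x @ Wv
attn = softmax(q @ k.T / np.sqrt(dim))  # shape (1, 1); softmax of one score is 1
out = attn @ v

print(np.allclose(attn, 1.0))  # True: attention over one position is a no-op
print(np.allclose(out, v))     # True: the output is just the value projection
```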