Closed: RomanKoshkin closed this issue 1 year ago.
Appendix E says a batch size of 64 and 40 training epochs were used.
Okay, but if the total number of fine-tuning samples is 60 (Section 5.1, Setup, last line), how can you make batches of size 64? Sorry, I must be missing something.
In config_files, you can see that the batch size is 60. One epoch means one batch in this setting (the number of fine-tuning samples, 60, is less than 64). But there are still some other questions. I think the code is not the latest version, because I can't get the same results if I use the pre-trained checkpoint to fine-tune.
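To illustrate the one-epoch-equals-one-batch point (a minimal PyTorch sketch with made-up tensors, not the repo's actual Epilepsy loader):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Made-up stand-in for the 60-sample fine-tuning split (shapes are
# illustrative: 60 single-channel segments, not the repo's real data).
x = torch.randn(60, 1, 178)
y = torch.randint(0, 2, (60,))

loader = DataLoader(TensorDataset(x, y), batch_size=60, shuffle=True)
print(len(loader))  # -> 1: with batch_size == dataset size, one epoch is exactly one batch
```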
Same here. I ran the "SleepEEG -> Epilepsy" transfer, but the Precision, Recall, and F1 results are very different from those reported in the paper.
What results did you get?
I got an F1 of around 65, averaged over 5 random seeds.
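For reference, this is roughly how I averaged over seeds (a sketch; `run_finetune` is a hypothetical wrapper around the repo's fine-tuning entry point):

```python
import random
import numpy as np
import torch

def set_seed(seed: int) -> None:
    # Seed all RNGs involved so each run is reproducible.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

f1_scores = []
for seed in range(5):
    set_seed(seed)
    f1 = run_finetune(seed)  # hypothetical: returns the test F1 for one fine-tuning run
    f1_scores.append(f1)

print(f"F1 = {np.mean(f1_scores):.2f} +/- {np.std(f1_scores):.2f} over {len(f1_scores)} seeds")
```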
Dear guys,
I'm so sorry for the previous messy version. I was transitioning to a new job and didn't have time to clean up the code.
We have updated the TFC implementation, and I'm confident it is now free of bugs. Please see the "Updates on Jan 2023" section of the repo README for a summary of the changes.
Hi @RomanKoshkin, for your questions specifically:
For pre-training on SleepEEG, the batch size is 128. I ran it for 200 epochs; I also tried 40, 100, and 300 epochs and found 200 to be a better choice than 40.
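In the config_files style, those settings would look roughly like this (the field names are my assumption of the usual Configs pattern, not a copy of the repo's actual SleepEEG config):

```python
# Sketch of the pre-training settings; field names are assumptions,
# not copied from the repo's SleepEEG config file.
class Config:
    def __init__(self):
        self.batch_size = 128  # pre-training batch size on SleepEEG
        self.num_epoch = 200   # also tried 40/100/300; 200 beat 40 in my runs
```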
For fine-tuning on Epilepsy, I set the batch size to 60 since we only have 60 samples. During fine-tuning the model converges very fast: I got ~87% F1 in fewer than 10 epochs, and it plateaus after that. So setting the number of epochs to either 20 or 40 is a good choice; in the updated version I set it to 20.
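If you want to watch the convergence yourself, you can log F1 per fine-tuning epoch with something like this (a sketch: `model`, `classifier`, `train_one_epoch`, and the loaders stand in for the repo's objects, and the real TFC forward pass returns separate time- and frequency-domain embeddings rather than a single tensor):

```python
import torch
from sklearn.metrics import f1_score

# Sketch: log macro F1 on the test split after each fine-tuning epoch.
for epoch in range(20):
    train_one_epoch(model, classifier, finetune_loader)  # hypothetical training step

    model.eval()
    preds, labels = [], []
    with torch.no_grad():
        for x, y in test_loader:
            logits = classifier(model(x))  # simplified; TFC returns multiple embeddings
            preds.extend(logits.argmax(dim=1).tolist())
            labels.extend(y.tolist())
    print(f"epoch {epoch}: macro F1 = {f1_score(labels, preds, average='macro'):.4f}")
    # In my runs this reached ~0.87 within ~10 epochs and then plateaued.
```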
In the paper, you say that for the 'SleepEEG -> Epilepsy' transfer you fine-tune on only 60 samples of the target data (Epilepsy). For how many epochs? Is this really correct, given that the classifier has 16,578 parameters? Also, for how many epochs did you pre-train the TF-C feature extractor on the SleepEEG data? One final question: I diffed this repo against the one at https://anonymous.4open.science/r/TFC-pretraining-6B07/README.md and they are the same (at least the code that implements the model, data augmentation, and loading). Do you have a more recent version of the code? I can't fine-tune on the 60 samples. Thanks!