Sorry for the late reply.
Dataset Split: SUN-SEG has no validation set but provides two testing sets: an Easy-testing set and a Hard-testing set. Therefore, in our experiments, we use the Hard-testing set as the validation set to select the best-performing model. To avoid the data leakage concern, you can focus mainly on the performance on the Easy-testing set, which is more convincing.
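For clarity, here is a minimal sketch of that model-selection protocol. The helper names (`train_one_epoch`, `evaluate`, and the three loaders) are illustrative placeholders, not the repository's actual API:

```python
import copy

# Hypothetical sketch: train on the SUN-SEG train split, select the best
# checkpoint on the Hard-testing set (used as a stand-in validation set),
# and report the final numbers on the Easy-testing set only.
best_score, best_state = 0.0, None

for epoch in range(num_epochs):
    train_one_epoch(model, train_loader, optimizer)   # SUN-SEG train split
    score = evaluate(model, hard_test_loader)         # Hard-testing as "validation"
    if score > best_score:
        best_score = score
        best_state = copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)
final_score = evaluate(model, easy_test_loader)       # reported result, no leakage
```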
Training Hyperparameters: Before releasing the code, we reran the experiments and found that the SGD optimizer achieves more stable performance than AdamW on the weakly-supervised polyp segmentation task. We therefore replaced the original AdamW optimizer with SGD, and the learning rate was adjusted accordingly to suit SGD.
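As a concrete illustration, a minimal PyTorch sketch of the optimizer change (the learning rate, momentum, and weight decay values below are placeholders, not the released settings):

```python
import torch

# Assumes `model` is the segmentation network defined elsewhere.
params = model.parameters()

# Original setting described in the paper: AdamW
# optimizer = torch.optim.AdamW(params, lr=1e-4, weight_decay=1e-4)

# Released setting: SGD, with the learning rate adjusted to suit SGD
optimizer = torch.optim.SGD(params, lr=1e-2, momentum=0.9, weight_decay=1e-4)
```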
Thank you for your clarification!
I find your work quite intriguing and inspirational! However, I have a few questions regarding the replication of your experiments. Specifically:
Dataset Split: In the train.py script, I noticed that the SUN-SEG test dataset is used during training as the evaluation set for saving the best-performing model. However, it is also used in the test.py script as the test set, and those results are reported in the paper. This appears to introduce a potential data leakage issue. Could you please clarify the dataset split settings? I want to make sure I understand your configuration correctly.
Training Hyperparameters: I noticed that some of the hyperparameters in the provided code differ from those mentioned in the paper; for instance, the initial learning rate and the choice of optimizer appear inconsistent. I would greatly appreciate clarification on these hyperparameters.
Thank you for your assistance!