tntek / DSPNet


Questions about the results of the experimental comparison #1

Open · WangBingJian233 opened this issue 3 months ago

WangBingJian233 commented 3 months ago

Hello author. Setting-1 is the initial setting proposed in (Roy et al., 2020), where test classes may appear in the background of training images. My understanding is that the experimental results under setting-1 should be the same as, or similar to, those reported in the comparison models' papers (e.g., SSL-ALPNet, Q-Net). Why are their setting-1 results in your paper so much lower than in their original papers? I hope to get your reply, thank you.
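(Editor's note: for readers unfamiliar with the two protocols, here is a minimal sketch of the difference. Under setting-2, as proposed by Ouyang et al., training slices whose masks contain any held-out test class are removed, whereas setting-1 keeps them, so test organs may appear in the background. The function `filter_training_slices` and its arguments are hypothetical illustrations, not code from the DSPNet repository.)

```python
import numpy as np

def filter_training_slices(slices, masks, test_class_ids, strict_setting_2):
    """Return the (slices, masks) pairs kept for training.

    slices, masks  : lists of 2D arrays of equal length
    test_class_ids : label ids of the held-out test classes
    strict_setting_2 : if True, apply setting-2 filtering; if False, setting-1
    """
    kept_x, kept_y = [], []
    for x, y in zip(slices, masks):
        contains_test = np.isin(y, list(test_class_ids)).any()
        if strict_setting_2 and contains_test:
            continue  # setting-2: drop slices containing any test class
        kept_x.append(x)  # setting-1: keep everything, test classes may
        kept_y.append(y)  # still appear in the background
    return kept_x, kept_y
```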

tntek commented 3 months ago

Thanks for your attention. In this work, all comparisons follow setting-1 or setting-2. For a fair comparison, apart from Roy's work, the other methods' results were obtained by re-running their official code on the same test bed as our method. This is stated in the experimental-setting section. I am not sure why the results do not match those in the original papers.

By the way, our work builds on the SSL framework proposed by Ouyang. On our server, Ouyang's method also performs worse than the numbers reported in the papers you mentioned. If you add Ouyang's gap to the results of the other comparison methods, those results come close to the originals. Based on this observation, the discrepancy may come from a systematic shift caused by GPU selection or software versions. In practice, reproducing original results 1:1 is always a challenge.
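(Editor's note: a generic reproducibility checklist for PyTorch experiments, illustrating the kind of "systematic shift" mentioned above; this is not code from DSPNet, and even with all of it, results can still differ across GPU models and CUDA/cuDNN versions.)

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 2023) -> None:
    """Fix all common sources of randomness for a PyTorch experiment."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Force deterministic cuDNN kernels (slower, and still not portable
    # across different GPU architectures or library versions).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```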

tntek commented 3 months ago

Other questions are welcome.

WangBingJian233 commented 3 months ago

Thank you for your reply. Based on your paper and code, I understand that your data preprocessing for the three datasets generates superpixel pseudo-labels (the same way Ouyang's SSL-ALPNet preprocessing is done). When you ran the Q-Net results, did you use superpixels or supervoxels? The Q-Net paper uses supervoxels (proposed in Hansen's AD-Net) rather than superpixels, and I think this may be one of the reasons why the Q-Net reproduction results differ so much from the original paper. By the way, would it be convenient for you to share the superpixel files produced by the data preprocessing of the CMR dataset? Thank you very much!

tntek commented 3 months ago

Q-Net uses supervoxels (3D) to implement segmentation, as AD-Net does. We have found that Q-Net's performance drops significantly when using superpixels (2D) as our method and Ouyang's do.
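(Editor's note: a minimal sketch contrasting the two pseudo-label schemes, per-slice 2D superpixels versus whole-volume 3D supervoxels, using scikit-image's `felzenszwalb` and `slic` as stand-ins; the actual SSL-ALPNet and AD-Net pipelines differ, and the parameter values below are illustrative assumptions, not the papers' settings.)

```python
import numpy as np
from skimage.segmentation import felzenszwalb, slic

def superpixel_pseudo_labels_2d(volume: np.ndarray) -> np.ndarray:
    """Segment each axial slice independently (2D superpixels)."""
    labels = np.zeros(volume.shape, dtype=np.int64)
    offset = 0
    for z in range(volume.shape[0]):
        sp = felzenszwalb(volume[z], scale=100, sigma=0.8, min_size=400)
        labels[z] = sp + offset          # keep labels disjoint across slices
        offset = labels[z].max() + 1
    return labels

def supervoxel_pseudo_labels_3d(volume: np.ndarray) -> np.ndarray:
    """Segment the whole volume at once (3D supervoxels, via SLIC here)."""
    return slic(volume, n_segments=500, compactness=0.1,
                channel_axis=None, start_label=0)
```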

tntek commented 3 months ago

In our paper, Q-Net's performance is based on the official code; namely, it uses supervoxels. I am currently on a trip. Once it is finished, I will send you the pseudo-labels.

WangBingJian233 commented 3 months ago

Looking forward to the superpixel files for the CMR dataset. Thank you for your reply. Wish you success in your work and life!

WangBingJian233 commented 3 months ago

Sorry to bother you again: has your paper been officially accepted by Medical Image Analysis 2024? (I have not found it on the journal's website yet.) Could you provide an updated citation?