Closed: YuigaWada closed this issue 10 months ago.
Hello, thanks for releasing this inspiring work.
I've been working diligently to reproduce the results from Table 2 in your paper. However, I encountered some discrepancies in my reproduction attempts, specifically observing lower dice scores than reported.
The Appendix mentions the use of a Sliding Window Inference technique, but I couldn't find the corresponding code in this repository. Specifically, I'm interested in the code to perform Sliding Window Inference on the model where the prompt encoder is discarded, and only the image encoder and mask decoder are tuned for fully automatic segmentation.
Having the official code would ensure reproducibility and fidelity to the original methods presented in the paper. It would be of immense help if you could provide or point me to the relevant code.
Regards!
Thanks for your suggestions. I will organize and post it soon.
Hello, the code has been uploaded in 3DSAM-adapter/train_auto.py and 3DSAM-adapter/test_auto.py.
Thank you for your prompt response! I have a few concerns. First, are these lines correct:
https://github.com/med-air/3DSAM-adapter/blob/b993875cfcb88ff0841d68bab3bb742c793fb68f/3DSAM-adapter/train_auto.py#L58
https://github.com/med-air/3DSAM-adapter/blob/b993875cfcb88ff0841d68bab3bb742c793fb68f/3DSAM-adapter/test_auto.py#L61
or should the patch size be (128, 128, 128) instead?
Also, the sliding-window inference call seems to differ slightly from what is described in the appendix. For instance, the appendix states overlap=0.7 and mode="constant". Is this the correct implementation?
Hi,
For the patch size, the automatic version uses a different value from the interactive segmentation version. We found that using 128 results in poor performance for all methods, so we use 256 instead.
The overlap and mode are less sensitive. A higher overlap gives better performance but is slower. As for the mode, different datasets benefit from different modes, but the improvement is minor.
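For readers reproducing this without the repository's exact script, the inference strategy being discussed can be sketched as a generic sliding-window pass: tile the volume into overlapping patches, run the model on each patch, and average the overlapping predictions (the "constant" blend mode weights every voxel in a patch equally). This is a minimal NumPy sketch, not the repository's implementation; the function name, the identity `predictor`, and the small sizes in the usage example are illustrative only, and in practice a MONAI-style `sliding_window_inference` with roi_size=(256, 256, 256), overlap=0.7 would be used on model logits.

```python
import numpy as np

def sliding_window_inference_3d(volume, predictor, roi_size=(256, 256, 256), overlap=0.7):
    """Run `predictor` on overlapping 3D patches and average overlapping
    predictions ("constant" blending: equal weight for every voxel in a patch)."""
    dims = volume.shape
    # Stride between patch starts; a higher overlap means a smaller stride.
    steps = [max(1, int(r * (1 - overlap))) for r in roi_size]

    def axis_starts(dim, r, st):
        # Start positions along one axis, always including the final
        # position dim - r so the volume edge is covered.
        s = list(range(0, dim - r + 1, st))
        if s[-1] != dim - r:
            s.append(dim - r)
        return s

    starts = [axis_starts(d, r, st) for d, r, st in zip(dims, roi_size, steps)]
    out = np.zeros(dims, dtype=np.float64)
    count = np.zeros(dims, dtype=np.float64)
    for z in starts[0]:
        for y in starts[1]:
            for x in starts[2]:
                sl = (slice(z, z + roi_size[0]),
                      slice(y, y + roi_size[1]),
                      slice(x, x + roi_size[2]))
                out[sl] += predictor(volume[sl])
                count[sl] += 1.0
    return out / count

# Usage sketch with toy sizes: an identity predictor must return the volume
# unchanged, since averaging identical overlapping predictions is a no-op.
vol = np.arange(4 * 4 * 4, dtype=np.float64).reshape(4, 4, 4)
result = sliding_window_inference_3d(vol, lambda p: p, roi_size=(2, 2, 2), overlap=0.5)
```

Note that overlap only changes the stride (and hence how many patches are run), which is why it trades accuracy for speed, while the blend mode changes only how overlapping predictions are weighted.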
Thank you for clarifying. I understand the choice of 256 for the patch size and the considerations regarding overlap and mode. Your explanation is very helpful.
I'm going to try reproducing the results based on these codes. I'll close this issue for now and will reopen if any further issues arise.