CSCYQJ / MICCAI23-ProtoContra-SFDA

This is the official code of the MICCAI 2023 paper "Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning"

Question about the PFA process #6

Open YYinn opened 10 months ago

YYinn commented 10 months ago

Hi! Thank you for sharing this work! The code is very well organized.

However, when training the CT-to-MR domain adaptation model, source-domain training works well (Dice around 0.90), but on the target domain the results after the PFA stage are not satisfactory. I noticed that some parameters in the corresponding yaml file differ from the values in the paper. For example, total_epoch in the PFA yaml file is set to 5, while the paper mentions 200 iterations, which should correspond to around 12 epochs or more. Could this be the reason, or could some other setting be causing the incorrect results?
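For reference, the rough arithmetic I have in mind is sketched below; the slice count and batch size are placeholders rather than the repo's actual values:

```python
import math

# Rough conversion from the paper's iteration count to the yaml's epoch count.
# The slice count and batch size below are placeholders; plug in the real
# target-set size and the batch_size from the PFA yaml.
num_target_slices = 256          # hypothetical number of 2D training slices
batch_size = 16                  # hypothetical batch size from the PFA yaml
iters_per_epoch = num_target_slices // batch_size       # 16 in this example

paper_iterations = 200
epochs_needed = math.ceil(paper_iterations / iters_per_epoch)   # -> 13 here

print(f"{paper_iterations} iterations ~= {epochs_needed} epochs "
      f"at {iters_per_epoch} iterations per epoch")
```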

Thank you!

YYinn commented 10 months ago

I found a possible reason: I repeat the channel inside the network instead of in the dataloader, which may lead to differences after the augmentation process.
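For anyone hitting the same mismatch, here is a minimal sketch of what I mean by repeating the channel in the dataloader, before augmentation; the dataset class and the transform signature are hypothetical stand-ins, not the repo's own code:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class SliceDataset(Dataset):
    """Hypothetical 2D-slice dataset illustrating where the channel repeat happens."""

    def __init__(self, slices, labels, transform=None):
        self.slices = slices        # list of (H, W) float arrays
        self.labels = labels        # list of (H, W) int arrays
        self.transform = transform  # augmentation applied to the 3-channel image

    def __len__(self):
        return len(self.slices)

    def __getitem__(self, idx):
        img = self.slices[idx][None]                  # (1, H, W)
        img = np.repeat(img, 3, axis=0)               # (3, H, W): repeat BEFORE augmentation
        lbl = self.labels[idx]
        if self.transform is not None:
            img, lbl = self.transform(img, lbl)       # augmentation now sees all 3 channels
        return torch.from_numpy(img).float(), torch.from_numpy(lbl).long()
```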

YYinn commented 10 months ago

The results of the PFA and CL stages cannot reach the numbers reported in the paper.

Could the authors provide the preprocessed data? Thanks a lot!

CSCYQJ commented 10 months ago

Thanks for your questions! Given your results, I'd like to know the no-adaptation performance of your source-domain model on the target domain (e.g., trained on MR and tested directly on CT). In my experiments, I found that the initial no-adaptation results are important for the subsequent adaptation.
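Concretely, the check I have in mind is roughly the following; the function and loader names are placeholders, and the sketch assumes the network returns per-class logits, so it is not the exact code in this repo:

```python
import torch

@torch.no_grad()
def no_adaptation_check(model, target_loader, num_classes, device="cuda"):
    """Run the source-trained model directly on target-domain data (no adaptation)."""
    model.eval().to(device)
    inter = torch.zeros(num_classes, device=device)
    denom = torch.zeros(num_classes, device=device)
    for images, labels in target_loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)            # assumes (B, C, H, W) logits
        for c in range(1, num_classes):                # skip background
            p, g = preds == c, labels == c
            inter[c] += (p & g).sum()
            denom[c] += p.sum() + g.sum()
    dice = 2 * inter[1:] / denom[1:].clamp(min=1)      # clamp avoids division by zero
    return dice.mean().item()
```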

YYinn commented 10 months ago

It does seem to be the issue. I've noticed that the model trained on CT performs very poorly when tested directly on MR, with an average Dice of only 0.0005, which is highly suspicious. However, I haven't yet identified the specific reason. After applying PFA, the results improve to around 0.81, so I didn't initially realize that this might be the problem. It's quite puzzling.

zqp1226358 commented 9 months ago

@YYinn Hello, in the CT source-domain training stage my Dice result is NaN, even though the dataset has been cropped according to the paper. Have you encountered a similar situation? Also, when you reproduced the results, did you use a single-channel CT image input?

Thank you again!
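For what it's worth, one frequent source of NaN is a Dice computed without a smoothing term when a class is absent from the batch; a minimal smoothed version (my own sketch, not the repo's implementation) is below. If the NaN persists even with this guard, the inputs themselves probably contain NaN/inf from preprocessing.

```python
import torch

def smoothed_dice(prob, target_onehot, eps=1e-5):
    """Soft Dice per class with an epsilon guard against empty classes.

    prob:          (B, C, H, W) softmax probabilities
    target_onehot: (B, C, H, W) one-hot ground truth
    """
    dims = (0, 2, 3)                                   # sum over batch and spatial dims
    inter = (prob * target_onehot).sum(dims)
    union = prob.sum(dims) + target_onehot.sum(dims)
    return (2 * inter + eps) / (union + eps)           # eps keeps the ratio finite
```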

zqp1226358 commented 9 months ago


Thanks, I've found the problem!

qwerasdzxcvb commented 4 months ago

@zqp1226358 Did you manage to solve the NaN Dice problem? I've run into it as well.

zqp1226358 commented 4 months ago

@qwerasdzxcvb It may be that the dataset is still not processed correctly; you can refer to the code in preprocess.ipynb.
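Roughly speaking, the preprocessing that matters here is intensity clipping, normalisation, and cropping around the heart; the sketch below is generic, with placeholder values, and is not the notebook's actual recipe, so take the real parameters from preprocess.ipynb.

```python
import numpy as np

def preprocess_ct_slice(slice_hu, lo=-200.0, hi=400.0):
    """Generic CT preprocessing: clip the intensity range, then rescale to [-1, 1].

    lo/hi are placeholder HU bounds, not the values used in preprocess.ipynb.
    Feeding raw, un-normalised intensities can easily blow up the loss and
    produce NaN Dice during source training.
    """
    clipped = np.clip(slice_hu.astype(np.float32), lo, hi)
    scaled = (clipped - lo) / (hi - lo)    # map to [0, 1]
    return scaled * 2.0 - 1.0              # map to [-1, 1]
```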

qwerasdzxcvb commented 4 months ago

@zqp1226358 Thank you very much for your answer. I'd also like to ask: in the CT-to-MR stage, is the MR dataset processed the same way as the CT data? My Dice is very, very low, only 0.1, and the Dice in the source_train stage is only 0.7. Did I fail to process the data properly, or do I need to change some parameters?

zqp1226358 commented 4 months ago

I've forgotten the exact details, but following the author's processing code, source-domain training can roughly reach 80% to 90%, so most likely the data still isn't processed correctly.

qwerasdzxcvb commented 4 months ago

@zqp1226358 OK, it's probably an issue with my own processing. Also, about the preprocess.ipynb mentioned above: I don't see it published by the author. Did you write it yourself?

zqp1226358 commented 4 months ago

@qwerasdzxcvb It's in the author's commit history. If you really can't find it, you can leave an email address and I'll send you a copy.

qwerasdzxcvb commented 4 months ago

@zqp1226358 Found it, thank you!

BarY7 commented 3 months ago

@qwerasdzxcvb Hi, did you manage to get the same results as the authors?