yhygao / CBIM-Medical-Image-Segmentation

A PyTorch framework for medical image segmentation
Apache License 2.0

About prediction.py #21

Open Axue666 opened 1 year ago

Axue666 commented 1 year ago

Hello @yhygao, thank you for your work! I used the liver (LiTS) dataset for prediction, but the visualization result I get is blurry, as shown in the figure below. I have already modified the code in the preprocessing function according to your method. This is what I modified:

np_img = np.clip(np_img, -17, 201)
np_img = np_img - 99.40
np_img = np_img / 39.39

I don't know what caused this. Please advise. Thank you! (screenshot of the blurry prediction attached)

yhygao commented 1 year ago

Are you dealing with the LiTS dataset with a 3D model? How do the training and validation curves look? Does the validation Dice look correct? I have never seen this issue before, but this block pattern looks like something is wrong with the window-based inference. You can debug inference/inference3d.py to make sure everything works as expected.

I'll check the prediction code on LiTS later, but I'm very busy recently, so I can't guarantee when I'll get to it.
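For context on what window-based inference involves, here is a minimal, hypothetical sketch of sliding-window 3D inference. This is not the repo's inference/inference3d.py; the function name, patch size, and stride are assumptions. Blocky outputs like the one in the screenshot often come from pasting patch predictions at wrong coordinates or without averaging the overlapping regions, which is one thing worth checking while debugging.

import torch

@torch.no_grad()
def sliding_window_inference(img, model, patch_size=(128, 128, 128),
                             stride=(64, 64, 64), num_classes=3):
    # img: (1, C, D, H, W) tensor, normalized the same way as during training,
    # and assumed to be at least patch_size in every spatial dimension.
    # Boundary handling (a final window flush with the volume edge) is omitted.
    _, _, D, H, W = img.shape
    logits = torch.zeros((1, num_classes, D, H, W), device=img.device)
    count = torch.zeros((1, 1, D, H, W), device=img.device)

    for d in range(0, D - patch_size[0] + 1, stride[0]):
        for h in range(0, H - patch_size[1] + 1, stride[1]):
            for w in range(0, W - patch_size[2] + 1, stride[2]):
                sl = (slice(None), slice(None),
                      slice(d, d + patch_size[0]),
                      slice(h, h + patch_size[1]),
                      slice(w, w + patch_size[2]))
                pred = model(img[sl])  # (1, num_classes, *patch_size) logits
                # Accumulate overlapping predictions and divide by the count at
                # the end; overwriting patches instead of averaging the overlap
                # is a common source of block artifacts.
                logits[sl] += pred
                count[sl] += 1

    return logits / count.clamp(min=1)

Averaging the overlapping logits, rather than overwriting them, is what smooths out the seams between patches.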

Axue666 commented 1 year ago

Firstly, thank you very much for responding to my questions. I used the LiTS dataset with a 3D model. The figure below shows the Dice curve from my training and the final cross-validation results; please check whether they look correct. Additionally, I have a question about the preprocessing function in your prediction.py code:

max98 = np.percentile(np_img, 98)
np_img = np.clip(np_img, 0, 98)
np_img = np_img / max98

Where did these values come from? Why did you comment out and not use the original code below? I saw that your prediction was based on KiTS, and the following code is used in dataset_kits.py. I hope you can provide an answer. (Because my initial visualization results were blurry, I suspected there was a problem with the code here.)

np_img = np.clip(np_img, -79, 304)
np_img -= 100.93
np_img /= 76.90

Finally, thank you again. As a student, I am very interested in your academic work and hope to learn more useful knowledge from it, so if time permits, please check the prediction code for LiTS. Thank you! (cross-validation results and screenshot attached)

yhygao commented 1 year ago

The learning curve suggests the model is learning something, but the accuracy is lower than in my experiments.

As I commented at line 164 of prediction.py, you need to manually modify the intensity normalization during preprocessing to be consistent with training (for the LiTS dataset, the training preprocessing code can be found in training/dataset/dim3/dataset_lits.py). So you need to modify lines 169-171 in prediction.py to the following:

img = np.clip(img, -17, 201)
img -= 99.40
img /= 39.39

This is because I followed the preprocessing pipeline proposed in nnUNet, where every dataset has its own foreground intensity mean and std. So you need to modify the preprocessing code in prediction.py according to your dataset. The current normalization

max98 = np.percentile(np_img, 98)
np_img = np.clip(np_img, 0, 98)
np_img = np_img / max98

is for MR images. If you don't modify lines 169-171 of prediction.py accordingly, the test image distribution will differ from the training distribution, resulting in nonsense predictions.
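To illustrate the point about keeping prediction-time preprocessing consistent with training, here is a minimal sketch (not the repo's actual prediction.py code). The function name and the dataset argument are hypothetical; the CT clip windows, means, and stds are the values quoted in this thread, and the MR branch follows the percentile scheme quoted above, clipping at the 98th-percentile intensity value.

import numpy as np

def normalize_like_training(np_img, dataset="lits"):
    # CT datasets use a fixed foreground intensity window plus z-score
    # normalization with dataset-specific mean and std.
    if dataset == "lits":      # values from dataset_lits.py quoted above
        np_img = np.clip(np_img, -17, 201)
        np_img = (np_img - 99.40) / 39.39
    elif dataset == "kits":    # values from dataset_kits.py quoted above
        np_img = np.clip(np_img, -79, 304)
        np_img = (np_img - 100.93) / 76.90
    else:
        # MR-style normalization: clip at the 98th-percentile intensity and
        # rescale to roughly [0, 1].
        max98 = np.percentile(np_img, 98)
        np_img = np.clip(np_img, 0, max98) / max98
    return np_img

Whatever scheme the training dataset class uses, the same lines must appear in the preprocessing step of prediction.py; otherwise the test images fall outside the intensity distribution the model was trained on.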

yhygao commented 1 year ago

I'll consider unifying the preprocessing of different datasets in a future commit, as I find that different intensity normalization values don't have a significant impact on the final performance.

Axue666 commented 1 year ago

Hello author! I'm very sorry to bother you again, and thank you very much for your earlier reply. I modified the code as you suggested, putting the liver preprocessing code into the prediction script and keeping it consistent with training, but I really don't know what is causing the predicted images to be so blurry. A couple of days ago I noticed that you updated the prediction.py code. I have been working on liver segmentation recently and urgently need this prediction code, so if you have time, I hope you can release a version of the code that can predict liver segmentation maps. Thank you very much!!!

yhygao commented 1 year ago

Making the preprocessing at prediction time consistent with the preprocessing at training time should, in theory, solve this problem. I previously tested on KiTS and ACDC without any issues. I'll try LiTS again soon and will do my best to give you a result this weekend.

Axue666 commented 1 year ago

Thank you for your reply!!!