ChuHan89 / WSSS-Tissue


some trouble in training second phase. #6

Open bbb-x1 opened 2 years ago

bbb-x1 commented 2 years ago

Hi. I'm having some trouble training the segmentation model: the IoU of the fourth category (NEC) in the evaluation phase is very small. Is this normal? Here is part of my output on the BCSS-WSSS dataset:

[Epoch:  7] IoUs:  [0.05311064 0.43155792 0.08846831 0.00664564]
[Epoch:  8] IoUs:  [0.64454513 0.23336869 0.33051128 0.18317763]
[Epoch:  9] IoUs:  [0.44317084 0.2187485  0.2805655  0.07169297]
[Epoch: 10] IoUs:  [0.4636515  0.34549918 0.         0.00055196]
[Epoch: 11] IoUs:  [5.27681240e-01 5.47585995e-01 3.29778434e-02 4.48991627e-04]
[Epoch: 12] IoUs:  [0.55640191 0.25940983 0.49560928 0.24347749]
[Epoch: 13] IoUs:  [3.79193170e-01 5.11070183e-01 3.57815322e-05 3.94131090e-04]
[Epoch: 14] IoUs:  [0.66979842 0.47498874 0.39502136 0.11608691]
[Epoch: 15] IoUs:  [0.041324   0.45222938 0.         0.02940076]
[Epoch: 16] IoUs:  [0.48411825 0.06914585 0.49067666 0.        ]
[Epoch: 17] IoUs:  [0.49002653 0.17553584 0.24705208 0.0619311 ]
[Epoch: 18] IoUs:  [0.67300695 0.54214751 0.54568384 0.12169068]
[Epoch: 19] IoUs:  [0.65041296 0.35260456 0.44456822 0.3946784 ]
[Epoch: 20] IoUs:  [0.39182901 0.0125952  0.         0.00041954]
[Epoch: 21] IoUs:  [0.15875704 0.49077868 0.27995287 0.        ]
[Epoch: 22] IoUs:  [4.44257401e-01 2.64663368e-01 7.24316917e-07 0.00000000e+00]

To a large extent, the best mIoU comes from the first epoch. The output in the test phase is as follows:

Test:
[numImages: 99701]
Acc:0.822067498179687, Acc_class:0.7369570210152916, mIoU:0.6347199910263178, fwIoU: 0.6977199208680192
Loss: 0.000
IoUs:  [0.7510428  0.70263782 0.48664593 0.59855342]
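
For reference, per-class IoU and mIoU as printed above are typically computed from true/false positive counts per class; this is a generic sketch, not the repository's actual evaluator:

```python
# Generic per-class IoU / mIoU computation over flat label lists.
# IoU_c = TP_c / (TP_c + FP_c + FN_c); mIoU is the mean over classes.

def per_class_iou(gt, pred, num_classes):
    tp = [0] * num_classes  # correctly predicted pixels per class
    fp = [0] * num_classes  # pixels wrongly assigned to a class
    fn = [0] * num_classes  # pixels of a class predicted as something else
    for g, p in zip(gt, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    ious = []
    for c in range(num_classes):
        denom = tp[c] + fp[c] + fn[c]
        ious.append(tp[c] / denom if denom else 0.0)
    return ious

# Toy example with 4 classes (e.g. TUM/STR/LYM/NEC):
gt   = [0, 0, 1, 1, 2, 3]
pred = [0, 1, 1, 1, 2, 2]
ious = per_class_iou(gt, pred, 4)
miou = sum(ious) / len(ious)
```

A class that the model never predicts correctly (here class 3) gets IoU 0.0, which is exactly the symptom visible in the NEC column of the epoch logs above.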

Here are my experiment settings:

backbone: resnet
out-stride: 16
dataset: bcss
loss-type: ce
epochs: 30
batch-size: 20
lr: 0.07
lr-scheduler: poly
ft: False (the default is True, but it raises an error.)
resume: init_weights/deeplab-resnet.pth.tar
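
The "poly" scheduler listed above usually decays the learning rate as base_lr * (1 - iter / max_iter) ** power; a minimal sketch, assuming the common default power = 0.9 (the repository's exact value may differ):

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Polynomial learning-rate decay, as commonly used with DeepLab."""
    return base_lr * (1 - cur_iter / max_iter) ** power

# Learning rate starts at base_lr and decays toward 0 over training.
print(poly_lr(0.07, 0, 1000))    # → 0.07
print(poly_lr(0.07, 500, 1000))  # 0.07 * 0.5 ** 0.9 ≈ 0.0375
```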

Looking forward to your reply.

linjiatai commented 2 years ago

When refactoring the code for upload to GitHub, we forgot to distinguish between the pseudo-annotation generation procedures for the two datasets. Thanks for your question, which helped us find this mistake in the refactored program; we have corrected it in "WSSS-Tissue/tool/infer_fun.py".

In the pseudo-annotation generation phase, "bg_score" is the white area produced by "cv2.threshold". Since the lungs are the main organ of the respiratory system, they contain many alveoli (air sacs) for exchanging oxygen and carbon dioxide, and these appear as white background in WSIs.

For the LUAD-HistoSeg dataset, we use "bg_score" in the pseudo-annotation generation phase to prevent these meaningless areas from participating in the Stage-2 training.

Since the white background in breast cancer images is meaningful (e.g., fat), we do not use "bg_score" to generate pseudo-masks (PM) for BCSS-WSSS.
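
As a pure-Python illustration of the "bg_score" idea (not the repository's actual code; the threshold value and the ignore label are assumptions), bright pixels found by thresholding are excluded from the pseudo-mask:

```python
# Sketch of the bg_score idea used for LUAD-HistoSeg: pixels brighter than
# a threshold are treated as white background and excluded from the
# pseudo-mask, mimicking a binary cv2.threshold. The threshold (220) and
# the ignore label (255) are illustrative assumptions.

IGNORE_LABEL = 255  # label typically excluded from the Stage-2 loss

def apply_bg_score(gray, pseudo_mask, thresh=220):
    """Return a copy of pseudo_mask where white-background pixels
    (gray value > thresh) are replaced with IGNORE_LABEL."""
    out = []
    for g_row, m_row in zip(gray, pseudo_mask):
        out.append([IGNORE_LABEL if g > thresh else m
                    for g, m in zip(g_row, m_row)])
    return out

# Tiny 2x3 example: two bright (alveolar/white) pixels get ignored.
gray = [[240, 100, 90],
        [230, 120, 80]]
mask = [[0, 1, 2],
        [3, 1, 0]]
print(apply_bg_score(gray, mask))  # → [[255, 1, 2], [255, 1, 0]]
```

For BCSS-WSSS this step is simply skipped, since white regions (e.g., fat) are meaningful tissue there.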

bbb-x1 commented 2 years ago

In addition to the differences you just mentioned, are there any other changes in the experimental settings between the BCSS-WSSS and LUAD-HistoSeg datasets?

linjiatai commented 2 years ago

> In addition to the differences you just mentioned, are there any other changes in the experimental settings between the BCSS-WSSS and LUAD-HistoSeg datasets?

No.

bbb-x1 commented 2 years ago

Thank you for your reply. Best wishes to you.