Open bbb-x1 opened 2 years ago
When refactoring the code for upload to GitHub, we forgot to distinguish the pseudo-annotation generation procedures for the two datasets. Thanks for your question, which helped us find this mistake in the refactored program; we have corrected it in "WSSS-Tissue/tool/infer_fun.py".
In the pseudo-annotation generation phase, "bg_score" is the white area produced by "cv2.threshold". Since the lungs are the main organ of the respiratory system, they contain many alveoli (small air sacs) that exchange oxygen and carbon dioxide, and these form white background regions in WSIs.
For the LUAD-HistoSeg dataset, we use "bg_score" in the pseudo-annotation generation phase to prevent such meaningless areas from participating in the Stage 2 training phase.
Since the white background in breast cancer images is meaningful (e.g., fat), we do not use "bg_score" to generate pseudo-masks (PM) for BCSS-WSSS.
In addition to the differences you just mentioned, are there any other changes in the experimental settings between the BCSS-WSSS and LUAD-HistoSeg datasets?
No.
Thank you for the reply, best wishes to you.
Hi. I have some trouble when training the segmentation model: the IoU of the fourth category (NEC) in the evaluation phase is too small, and I want to know whether this is normal. Here is my partial output on the BCSS-WSSS dataset:
To a large extent, the best mIoU depends on the first epoch; the output in the test phase is as follows:
Here are my experiment settings:
Looking forward to your reply.
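For reference, the per-class IoU being discussed is intersection-over-union computed separately for each class label. A minimal sketch (not the repository's evaluation code; the class ids 0–3 with 3 = NEC are an assumption for illustration):

```python
import numpy as np

def per_class_iou(pred, gt, num_classes=4):
    """Return IoU for each class id; NaN where a class is absent from both maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

pred = np.array([[0, 1], [2, 3]])
gt   = np.array([[0, 1], [2, 2]])
ious = per_class_iou(pred, gt)
print(ious)  # class 2 IoU = 0.5, class 3 IoU = 0.0
```

If the model almost never predicts a class (here, the NEC-like class 3), its intersection stays near zero while the union does not, so its IoU collapses even when the other classes score well, which is one common cause of the symptom described above.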