Closed pranav-deo closed 4 years ago
Hi!
You can run the nucleus segmentation training script: https://github.com/SBU-BMI/quip_cnn_segmentation/blob/master/segmentation-of-nuclei/READMD.md#training
However, we do not save refined images during training by default. To turn it on, set patch_dump to True: https://github.com/SBU-BMI/quip_cnn_segmentation/blob/master/segmentation-of-nuclei/buffer.py#L22
The image-saving logic will then be turned on here: https://github.com/SBU-BMI/quip_cnn_segmentation/blob/master/segmentation-of-nuclei/buffer.py#L36
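To make the flag's effect concrete, here is a minimal sketch of a buffer that only dumps refined patches when patch_dump is enabled. The name patch_dump comes from the linked buffer.py; everything else (the Buffer class shape, the .npy output format, the file naming) is an illustrative assumption, not the repo's actual implementation:

```python
import os
import tempfile

import numpy as np


class Buffer:
    """Sketch of a buffer that optionally dumps refined image patches."""

    def __init__(self, out_dir, patch_dump=False):
        # patch_dump mirrors the flag at buffer.py#L22; False by default
        self.patch_dump = patch_dump
        self.out_dir = out_dir
        self.count = 0

    def push(self, refined_batch):
        # The saving logic only runs when patch_dump is True,
        # analogous to the branch at buffer.py#L36.
        if self.patch_dump:
            os.makedirs(self.out_dir, exist_ok=True)
            for patch in refined_batch:
                path = os.path.join(self.out_dir, "refined_%06d.npy" % self.count)
                np.save(path, patch)
                self.count += 1


out = os.path.join(tempfile.gettempdir(), "refined_patches_demo")
buf = Buffer(out, patch_dump=True)
buf.push(np.zeros((4, 64, 64, 3), dtype=np.uint8))  # a fake batch of 4 patches
print(buf.count)  # 4 patches written
```

With patch_dump=False (the default), push() is a no-op with respect to disk, which matches the behavior described above.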
Thx
Thank you for the prompt reply!
I still have some doubts.
During the training phase, what data should go in the folder ./data/nuclei/real? The data extracted by training-data-synthesis includes the directories contour, cyto, detect, image, intp_mask, mask, nucl, refer, and source, but no directory named real.
Are we supposed to do some processing on the masks from training-data-synthesis before feeding them to segmentation-of-nuclei? According to the READMEs, the former produces masks with 3 bits while the latter requires masks with only 2 bits.
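If the conversion really is just dropping the extra bit, it could look like the sketch below. Note this is a guess: which bit to drop, and what each bit encodes, are assumptions that should be checked against the two READMEs before using this on real data:

```python
import numpy as np


def to_two_bit(mask3):
    """Keep only the two lowest bit-planes of a 3-bit mask.

    ASSUMPTION: segmentation-of-nuclei expects mask values in [0, 3],
    so we discard the highest of the three bits with a bitwise AND.
    Verify the actual bit semantics in the repo READMEs first.
    """
    return (mask3 & 0b011).astype(np.uint8)


# Tiny demo: values 0-7 (3 bits) are clamped into 0-3 (2 bits).
m = np.array([[0b111, 0b101],
              [0b010, 0b100]], dtype=np.uint8)
print(to_two_bit(m))  # [[3 1] [2 0]]
```

Applied per mask image, this would make the training-data-synthesis output conform to the 2-bit format before it is copied into the segmentation-of-nuclei data folders.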
What is a good number of data samples to have in each of the required data folders for segmentation-of-nuclei?
Thanks a lot for your support! :smiley:
Hello!
I want to generate synthetic images with nuclear masks. For that, I extracted a 4000 x 4000 px region from a whole slide image and ran draw_fake.sh from the training-data-synthesis directory, and I got the initial synthetic images with masks. Frankly, I'm lost about what to do after getting these images. Where do I get the pre-trained CNN weights, and how do I run the subsequent scripts?
Any help will be highly appreciated. TIA!