Open Ltieregenius opened 5 years ago
Are those your pre-trained results?
`labels/membranes` is the membrane labeling provided by the challenge, and `labels/gt_segmentation` is a 3d segmentation derived from `labels/membranes`. This is a necessary pre-processing step in order to train the affinity network. `affinities` is the prediction of the CNN.
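To make the layout above concrete, here is a small sketch that builds a mock file with the same group structure and lists its keys (the real `isbi_train_volume.h5` is assumed to look like this; the mock filename and array shapes are illustrative only):

```python
# Build a tiny mock HDF5 file with the dataset layout described above,
# then inspect it the same way you would inspect the real volume.
import numpy as np
import h5py

with h5py.File("mock_isbi_train_volume.h5", "w") as f:
    f.create_dataset("raw", data=np.zeros((4, 32, 32), dtype="uint8"))
    f.create_dataset("labels/membranes", data=np.zeros((4, 32, 32), dtype="uint8"))
    f.create_dataset("labels/gt_segmentation", data=np.zeros((4, 32, 32), dtype="uint64"))
    # affinity channels come first: (channels, z, y, x)
    f.create_dataset("affinities", data=np.zeros((3, 4, 32, 32), dtype="float32"))

with h5py.File("mock_isbi_train_volume.h5", "r") as f:
    print(sorted(f.keys()))            # ['affinities', 'labels', 'raw']
    print(sorted(f["labels"].keys()))  # ['gt_segmentation', 'membranes']
```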
Do the results seem that only the mutex-watershed cannot get good segmentation results?
Sorry, I don't understand what you mean by this question.
Sorry to bother you again. Three days ago I followed your steps and chose the 'mws' and 'thresh' algorithms; running them gave the following results:
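For reference, a 'thresh'-style baseline can be sketched as follows. This is a hedged, simplified stand-in (the repository's actual 'thresh' implementation and its parameters may differ): average the affinity channels into a boundary probability, threshold it, and label the connected foreground components.

```python
# Simplified 'thresh' baseline: threshold affinities, then run
# connected components on the resulting foreground mask.
import numpy as np
from scipy import ndimage

def threshold_segmentation(affinities, threshold=0.5):
    # affinities: (channels, z, y, x); high affinity = same object
    boundary_prob = 1.0 - affinities.mean(axis=0)
    foreground = boundary_prob < threshold           # pixels inside cells
    segmentation, n_objects = ndimage.label(foreground)
    return segmentation, n_objects

# Toy volume: two regions separated by a low-affinity membrane plane.
aff = np.ones((3, 1, 8, 8), dtype="float32")
aff[:, :, :, 4] = 0.0                                # membrane column at x=4
seg, n = threshold_segmentation(aff)
print(n)  # 2
```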
As we can see, the boundary of the cell membrane is really clear, which seems to indicate a good segmentation result. However, when I compared these results to the ground truth:
There are still obvious differences between my output results and the labels. I then used Fiji to calculate the V-Rand and V-Info scores:
Given these evaluation results, my questions are: (1) Do I need to apply dilation to improve the width of the boundary, or do some other pre-processing? (2) Could you please point out my mistakes when running your code, e.g. which parameters I could adjust to improve my results?
There are still obvious differences between my output results and the labels.
Of course, the algorithm will not reproduce the ground truth 100%. That cannot be expected. Please note that there are also a lot of ambiguous places in these segmentations.
Given these evaluation results, my questions are: (1) Do I need to apply dilation to improve the width of the boundary, or do some other pre-processing? (2) Could you please point out my mistakes when running your code, e.g. which parameters I could adjust to improve my results?
Yes, dilation might improve the results a bit, but there seems to be something more fundamentally wrong with the evaluation you are running. The segmentation looks much better than the numbers you report.
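Dilating the predicted boundaries before evaluation can be sketched as below. This is a hedged example, assuming the ISBI membrane labels are several pixels thick, so a thin predicted boundary gets penalized even when well placed; the iteration count is a parameter to tune, not a recommendation:

```python
# Widen a binary boundary mask by morphological dilation before scoring.
import numpy as np
from scipy import ndimage

def dilate_boundaries(boundary_mask, iterations=2):
    # boundary_mask: bool array, True on membrane/boundary pixels
    return ndimage.binary_dilation(boundary_mask, iterations=iterations)

mask = np.zeros((9, 9), dtype=bool)
mask[4, :] = True                      # 1-pixel-wide horizontal boundary
thick = dilate_boundaries(mask, iterations=1)
print(thick.sum())  # 27 (rows 3, 4 and 5 are now fully True)
```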
As a sanity check, I would evaluate the ground truth against itself and make sure that this yields a perfect score. Alternatively, use some other evaluation code, e.g. https://github.com/cremi/cremi_python/tree/master/cremi/evaluation.
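That sanity check can be done with a plain numpy implementation of the adapted Rand error (a simplified version, not the challenge's official Fiji or CREMI scorer): scoring a segmentation against itself must give an error of 0.

```python
# Simplified adapted Rand error from the label contingency table.
# Evaluating the ground truth against itself should return 0.0.
import numpy as np

def adapted_rand_error(seg, gt):
    seg = seg.ravel().astype(np.int64)
    gt = gt.ravel().astype(np.int64)
    n = seg.size
    # joint label counts, encoded as a single integer per pixel pair
    pairs = seg * (gt.max() + 1) + gt
    counts = np.bincount(pairs).astype(np.float64)
    sum_p2 = (counts ** 2).sum() / n ** 2
    sum_a2 = (np.bincount(seg).astype(np.float64) ** 2).sum() / n ** 2
    sum_b2 = (np.bincount(gt).astype(np.float64) ** 2).sum() / n ** 2
    precision = sum_p2 / sum_a2
    recall = sum_p2 / sum_b2
    f_score = 2.0 * precision * recall / (precision + recall)
    return 1.0 - f_score

gt = np.array([[1, 1, 2, 2], [1, 1, 2, 2]])
print(adapted_rand_error(gt, gt))  # 0.0
```

If this does not return a perfect score on your ground truth, the problem is in how the volumes are being loaded or aligned, not in the segmentation itself.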
Thanks for sharing. I tried to open isbi_train_volume.h5 and it contains ['affinities', 'labels', 'raw']; also, f['labels'] contains 'gt_segmentation' and 'membranes'. Are those your pre-trained results? Do the results seem that only the mutex-watershed cannot get good segmentation results?