lijing-coder opened this issue 1 year ago
Have you ever compared using the superpixels generated by Ouyang et al.'s method directly for AD-Net training?
Thank you for your excellent work and look forward to your answer!
> Have you ever compared using the superpixels generated by Ouyang et al.'s method directly for AD-Net training?
Yes, we have seen a similar problem in some hard cases when processing 3D data. We did not compare the two approaches. Because we run superpixel segmentation slice by slice, the same organ receives different superpixel class ids on different slices, so we cannot select three adjacent slices based on a single superpixel class id. Doing that comparison is a good idea.
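To make the id-mismatch concrete: one possible workaround (my own sketch, not part of AD-Net or Ouyang's code) is to relabel each slice by greedy maximum overlap with the previous slice, so that the same region keeps one id across adjacent slices. The `match_labels` helper and the toy label arrays below are illustrative assumptions.

```python
import numpy as np

def match_labels(prev, curr):
    """Relabel `curr` so each region takes the id of the region in the
    previous slice that it overlaps most (greedy overlap matching).
    Regions with no overlap get a fresh, unused id."""
    out = np.zeros_like(curr)
    next_id = prev.max() + 1
    for lab in np.unique(curr):
        if lab == 0:  # 0 = background
            continue
        mask = curr == lab
        overlap = prev[mask]
        overlap = overlap[overlap > 0]
        if overlap.size:
            # inherit the id of the most-overlapping previous region
            out[mask] = np.bincount(overlap).argmax()
        else:
            out[mask] = next_id
            next_id += 1
    return out

# Toy example: the same blob carries id 7 on slice 0 but id 3 on slice 1,
# which is exactly what breaks "pick 3 adjacent slices by class id".
s0 = np.array([[7, 7, 0], [7, 7, 0], [0, 0, 0]])
s1 = np.array([[3, 3, 0], [3, 3, 0], [0, 0, 0]])
print(match_labels(s0, s1))  # blob is relabelled back to id 7
```

With the ids reconciled this way, the same class id could be used to pick the corresponding region on consecutive slices, which would make the slice-by-slice comparison feasible.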
Supervoxels were generated for the 3D data in AD-Net, but when I visualized them, their quality was poor in purely visual terms. I also visualized Ouyang's superpixels on the same image, and they looked good. See the images below.
This picture shows the 33rd slice of image_37.nii.gz; the top two images are taken from the supervoxels generated by AD-Net, and the images below them are the superpixels generated by Ouyang's method.
This picture is the ground truth.