Closed luoyi1hao closed 4 years ago
Do you first warp the image with the transformation, segment it, and then evaluate?
Hello, I would like to ask whether the reported experimental results were obtained at an image size of 64x64.
Hello @luoyi1hao,
In testing, you have to load the images and their respective segmentations, and then use those images to compute the displacement field. To warp a source segmentation using the obtained displacement field, you can do the following:
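A minimal sketch of such a warp, assuming a displacement field stored as a `(2, H, W)` array in pixel units and using SciPy (this is an illustration, not the repository's actual code; the function name `warp_segmentation` is made up here):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_segmentation(seg, disp):
    """Warp a label map with a dense displacement field.

    seg:  (H, W) integer label map (source segmentation).
    disp: (2, H, W) displacement field in pixels (dy, dx), mapping
          each target location to its source sampling location.
    """
    h, w = seg.shape
    grid = np.mgrid[0:h, 0:w].astype(np.float64)  # identity sampling grid
    coords = grid + disp                          # shifted sampling locations
    # order=0 (nearest-neighbor) keeps the labels integer-valued
    return map_coordinates(seg, coords, order=0, mode='nearest')
```

Nearest-neighbor interpolation matters here: linear interpolation would blend label values and produce labels that don't exist in the source mask.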
The results of the experiments are obtained at 256x256. To compute the Dice values, pass the warped source segmentation and the target segmentation (ground truth) to the function get_dice_metric (utils.py). The metric is computed per anatomical structure (per label); you can report the per-label values or their average. For the distance metrics, proceed in the same way, except that you also have to set the pixel distance.
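For reference, per-label Dice can be sketched like this (an illustrative reimplementation, not the repo's get_dice_metric):

```python
import numpy as np

def dice_per_label(pred, target, labels):
    """Dice coefficient per anatomical label.

    pred, target: integer label maps of the same shape.
    labels: iterable of label values to evaluate.
    """
    scores = {}
    for lab in labels:
        p = (pred == lab)
        t = (target == lab)
        denom = p.sum() + t.sum()
        # Convention: Dice = 1 when the label is absent from both masks
        scores[lab] = 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
    return scores
```

The average over structures is then simply `np.mean(list(scores.values()))`.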
I hope this helps you.
Thank you for your help
Hello, thanks for the previous help. I ran into some confusion when testing the network's performance. When testing on 256x256 images, did you use the network with 64x64 inputs (which would require a displacement field 4 times larger)?
Hello @luoyi1hao. Yes. In testing, the displacement field produced by the 64x64 registration network is resized to the input image size in order to perform the registration.
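Resizing the field involves both upsampling the grid and rescaling the displacement magnitudes by the same factor (going from 64x64 to 256x256, the vectors are multiplied by 4). A sketch under those assumptions, using SciPy's zoom (not the repository's actual code):

```python
import numpy as np
from scipy.ndimage import zoom

def resize_displacement(disp, out_shape):
    """Resize a (2, h, w) displacement field to (2, H, W).

    The displacement values (in pixels) are scaled by the same factor
    as the spatial resolution, so the field describes the same warp
    at the new image size.
    """
    _, h, w = disp.shape
    fy, fx = out_shape[0] / h, out_shape[1] / w
    up = np.stack([zoom(disp[0], (fy, fx), order=1),   # bilinear upsampling
                   zoom(disp[1], (fy, fx), order=1)])
    up[0] *= fy  # scale vertical displacements
    up[1] *= fx  # scale horizontal displacements
    return up
```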
Thank you @lucasmansilla .
Hello, thanks for your previous help. I now want to experiment on the Montgomery and Shenzhen datasets. Can I provide you with preprocessed images on these two datasets?
Excuse me, do you first use SimpleElastix to apply an affine transformation as preprocessing? The Elastix results in the experiments, however, use a deformable transformation.
Hello @luoyi1hao. In the experiments with SimpleElastix, we use affine transformation as initialization of the deformable registration process. Here, you can see how it works: https://github.com/lucasmansilla/Multi-Atlas_RCA/blob/master/register.py.
About the preprocessed images of the Montgomery and Shenzhen datasets, I'm going to upload them to Google Drive in the next few days. If you give me your email address, then I can send you the download link.
Thank you so much for your help. My email is a582327031@163.com.
Hello, thanks for your previous help. I have performed further preprocessing on the other two datasets, but the experimental results are not very good. Could you please provide us with the data used for the experiments in the article, as you did for JSRT, such as the 64x64 and 256x256 images for Montgomery and the 64x64 and 3000x3000 images for Shenzhen?
Hello @luoyi1hao. Yes, of course, I'll send you those datasets as soon as I can. Regards.
Hello, during testing I used the predicted displacement field to warp the segmentation mask directly. However, the metrics computed this way do not seem very accurate. Could you explain how these metrics should be calculated?