lucasmansilla / ACRN_Chest_X-ray_IA

Learning Deformable Registration of Medical Images with Anatomical Constraints

How to calculate other metrics such as Dice during testing #2

Closed luoyi1hao closed 4 years ago

luoyi1hao commented 4 years ago

Hello, during testing I used the predicted displacement field to directly transform the segmentation mask. However, the metrics calculated this way do not seem very accurate. I would like to ask how you calculate these metrics.

luoyi1hao commented 4 years ago

Or should the warped image be segmented first, and the metrics computed afterwards?

luoyi1hao commented 4 years ago

Hello, I would like to ask whether the experimental results were obtained at an image size of 64x64.

lucasmansilla commented 4 years ago

Hello @luoyi1hao,

In testing, you have to load the images and their respective segmentations, and then use those images to get the displacement field. To warp a source segmentation with the obtained displacement field, you can do the following (see the sketch after the list):

  1. Convert the hard segmentation to one-hot format with the function get_one_hot_encoding_from_hard_segm (utils.py).
  2. Warp the one-hot segmentation with the function batch_displacement_warp2d (displacement.py), passing the one-hot segmentation and the displacement field, and setting vector_fields_in_pixel_space to True.
  3. Convert the warped segmentation back to a hard segmentation with the function get_hard_segm_from_prob_map (utils.py).
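
In code, the three steps look roughly like this (a minimal sketch; the argument names and array shapes are assumptions, so check the actual signatures in utils.py and displacement.py):

```python
import numpy as np
from utils import get_one_hot_encoding_from_hard_segm, get_hard_segm_from_prob_map
from displacement import batch_displacement_warp2d

# Assumed shapes: a batch of hard label maps (batch, H, W) and a
# displacement field (batch, H, W, 2). Placeholders for illustration.
hard_segm = np.zeros((1, 256, 256), dtype=np.int32)        # source segmentation
disp_field = np.zeros((1, 256, 256, 2), dtype=np.float32)  # predicted field

# 1. Hard segmentation -> one-hot probability maps.
one_hot = get_one_hot_encoding_from_hard_segm(hard_segm, nb_labels=3)

# 2. Warp the one-hot maps; the field is expressed in pixel units.
#    (batch_displacement_warp2d may expect TensorFlow tensors; convert if needed.)
warped_one_hot = batch_displacement_warp2d(
    one_hot, disp_field, vector_fields_in_pixel_space=True)

# 3. Warped probability maps -> hard segmentation.
warped_segm = get_hard_segm_from_prob_map(warped_one_hot)
```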

The results of the experiments are obtained at 256x256. To calculate the Dice values, you pass the warped source segmentation and the target segmentation (ground truth) to the function get_dice_metric (utils.py). The metric is calculated per anatomical structure (per label); you can report the per-label values or their average. For the distance metrics you proceed in the same way, except that you also have to set the pixel distance.
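
For reference, a minimal NumPy version of the per-label Dice computation (the interface of get_dice_metric in the repository may differ):

```python
import numpy as np

def dice_per_label(segm_a, segm_b, labels):
    """Dice coefficient for each anatomical structure (label)."""
    scores = {}
    for label in labels:
        a = segm_a == label
        b = segm_b == label
        denom = a.sum() + b.sum()
        scores[label] = 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
    return scores

# Example with the warped source segmentation and the ground-truth target
# (random placeholders; hypothetical label set {1, 2, 3}):
warped_segm = np.random.randint(0, 4, (256, 256))
target_segm = np.random.randint(0, 4, (256, 256))
scores = dice_per_label(warped_segm, target_segm, labels=[1, 2, 3])
mean_dice = np.mean(list(scores.values()))
```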

I hope this helps you.

luoyi1hao commented 4 years ago

Thank you for your help

luoyi1hao commented 4 years ago

Hello, thanks for the previous help. I ran into some confusion when testing the performance of the network. Did you use the 64x64 input network when testing on 256x256 images (i.e., a 4x larger displacement field)?

lucasmansilla commented 4 years ago

Hello @luoyi1hao. Yes. In testing, the displacement field produced by the 64x64 registration network is resized to the input image size in order to perform the registration.
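
Roughly, that resizing looks like this (a minimal sketch assuming the field is stored as an (H, W, 2) array in pixel units; since the vectors are in pixel units, their magnitudes are scaled by the same factor as the grid):

```python
import numpy as np
from scipy.ndimage import zoom

def resize_displacement_field(field, factor):
    """Upsample an (H, W, 2) displacement field given in pixel units.

    Both the spatial grid and the vector magnitudes are scaled by
    `factor`, e.g. 4 when going from 64x64 to 256x256.
    """
    # Bilinear interpolation over the spatial axes only (component axis untouched).
    resized = zoom(field, (factor, factor, 1), order=1)
    return resized * factor

field_64 = np.random.rand(64, 64, 2).astype(np.float32)
field_256 = resize_displacement_field(field_64, factor=4)
print(field_256.shape)  # (256, 256, 2)
```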

luoyi1hao commented 4 years ago

Thank you @lucasmansilla .

luoyi1hao commented 4 years ago

Hello, thanks for your previous help. I now want to run experiments on the Montgomery and Shenzhen datasets. Could you provide me with the preprocessed images for these two datasets?

luoyi1hao commented 4 years ago

Excuse me, in the preprocessing, do you first use SimpleElastix to apply an affine transformation? The Elastix experiment results, however, seem to use a deformable transformation.

lucasmansilla commented 4 years ago

Hello @luoyi1hao. In the experiments with SimpleElastix, we use an affine transformation as initialization for the deformable registration process. Here you can see how it works: https://github.com/lucasmansilla/Multi-Atlas_RCA/blob/master/register.py.
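
A minimal sketch of that affine-then-deformable pipeline with SimpleElastix (file names are placeholders; the exact parameter maps we use are in the linked register.py):

```python
import SimpleITK as sitk

fixed = sitk.ReadImage('fixed.png', sitk.sitkFloat32)
moving = sitk.ReadImage('moving.png', sitk.sitkFloat32)

elastix = sitk.ElastixImageFilter()
elastix.SetFixedImage(fixed)
elastix.SetMovingImage(moving)

# Affine registration first, as initialization...
elastix.SetParameterMap(sitk.GetDefaultParameterMap('affine'))
# ...then deformable (B-spline) registration on top of it.
elastix.AddParameterMap(sitk.GetDefaultParameterMap('bspline'))

elastix.Execute()
result = elastix.GetResultImage()
```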

About the preprocessed images of the Montgomery and Shenzhen datasets, I'm going to upload them to Google Drive in the next few days. If you give me your email address, then I can send you the download link.

luoyi1hao commented 4 years ago

Thank you so much for your help. My email is a582327031@163.com.

luoyi1hao commented 4 years ago

Hello, thanks for your previous help. I have done further preprocessing on the other two datasets, but the experimental results are not very good. Could you please provide us with the data used for the experiments in the article, as with JSRT: the 64x64 and 256x256 images for Montgomery, and the 64x64 and 3000x3000 images for Shenzhen?

lucasmansilla commented 4 years ago

Hello @luoyi1hao. Yes, of course, I'll send you those datasets as soon as I can. Regards.