cwmok / Conditional_LapIRN

Conditional Deformable Image Registration with Convolutional Neural Network
MIT License

validation TRE and CT data preprocessing #11

Closed annareithmeir closed 1 year ago

annareithmeir commented 1 year ago

Thanks a lot for this nice work!

I have 3 questions:

1) I want to evaluate the landmark TRE in the validation step during training and also use it as an evaluation metric at inference time. How can I transform the n 3D landmarks (an array of shape [n, 3]) with the transformation estimated by a forward pass through the trained model?

2) Do you have any recommendations for the validation step? Currently, I run multiple forward passes over a predefined range of lambdas and then take the best result.

3) I am a little confused about the preprocessing of the lung CT data. In file:///home/nnrthmr/Downloads/Learn2Reg2021_Tony_oral-1.pdf it says that you performed windowing of [100, 1518]. Are these the pixel-value ranges after clipping, or the window level and window width? Also, do I need to perform the clipping first and then normalize to [0, 1], or the other way around?

Thanks a lot!

cwmok commented 1 year ago

Hi @annareithmeir,

  1. I first transform F_X_Y into an unnormalized displacement field and apply it to the landmarks using the map_coordinates function.

    import numpy as np
    import torch
    from scipy.ndimage import map_coordinates

    def compute_tre(x, y, spacing=(1, 1, 1)):
        return np.linalg.norm((x - y) * spacing, axis=1)

    # F_X_Y is the network output: a normalized displacement field in [-1, 1]
    # with channel order (z, y, x). Rescale to voxel units and flip to (x, y, z).
    F_X_Y_xyz = torch.zeros(F_X_Y.shape, dtype=F_X_Y.dtype, device=F_X_Y.device)
    _, _, x, y, z = F_X_Y.shape
    F_X_Y_xyz[0, 0] = F_X_Y[0, 2] * (x - 1) / 2
    F_X_Y_xyz[0, 1] = F_X_Y[0, 1] * (y - 1) / 2
    F_X_Y_xyz[0, 2] = F_X_Y[0, 0] * (z - 1) / 2

    F_X_Y_xyz_cpu = F_X_Y_xyz.data.cpu().numpy()[0]

    # keypoints is an [n, 6] array: fixed (x, y, z) in the first three columns,
    # moving (x, y, z) in the last three.
    moving_keypoint = keypoints[:, 3:]
    fixed_keypoint = keypoints[:, :3]

    # Sample each displacement component at the fixed landmark positions.
    fixed_disp_x = map_coordinates(F_X_Y_xyz_cpu[0], fixed_keypoint.transpose())
    fixed_disp_y = map_coordinates(F_X_Y_xyz_cpu[1], fixed_keypoint.transpose())
    fixed_disp_z = map_coordinates(F_X_Y_xyz_cpu[2], fixed_keypoint.transpose())
    lms_fixed_disp = np.array((fixed_disp_x, fixed_disp_y, fixed_disp_z)).transpose()

    # Warp the fixed landmarks and compute the TRE in millimetres.
    warped_fixed_keypoint = fixed_keypoint + lms_fixed_disp
    tre_score = compute_tre(warped_fixed_keypoint, moving_keypoint, spacing=(1.5, 1.5, 1.5)).mean()
    tre_total.append(tre_score)
  2. I didn't run multiple forward passes during validation. Instead, I set a fixed hyperparameter and run a single forward pass. After training, I select the model with the best TRE on the validation set, and only then search for the best hyperparameter on the validation set using that well-trained model.

  3. windowing of [100, 1518] = np.clip(fixed_img, a_min=100, a_max=1518). But this is not the best approach: you should use a lung lobe mask to mask the similarity measure and use a wider window, e.g., np.clip(fixed_img, a_min=-1100, a_max=2518). And yes, you need to perform the clipping first and then normalize to [0, 1]. At this year's MICCAI, I will present an improved LapIRN for this task.
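A minimal sketch of the clip-then-normalize order described in point 3 (the function name and toy values are illustrative, not from the repo):

    import numpy as np

    def preprocess_ct(img, a_min=-1100, a_max=2518):
        """Clip CT intensities to a window, then rescale to [0, 1]."""
        img = np.clip(img, a_min, a_max)        # 1) clip first
        return (img - a_min) / (a_max - a_min)  # 2) then normalize to [0, 1]

    # Toy volume with values outside the window
    vol = np.array([-2000.0, -1100.0, 0.0, 2518.0, 4000.0])
    out = preprocess_ct(vol)
    # values below a_min map to 0.0, values above a_max map to 1.0

Clipping first matters: normalizing before clipping would leave the output range dependent on outliers in each scan.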

Hope the above comments help.

Regards, Tony

annareithmeir commented 1 year ago

Thanks a lot! Looking forward to the improved LapIRN!

annareithmeir commented 1 year ago

Just for clarification: By map_coordinates you mean the scipy.ndimage.map_coordinates function?

cwmok commented 1 year ago

Yes.
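For reference, a minimal scipy.ndimage.map_coordinates call in the same shape as the snippet above, interpolating a 3-D array at fractional landmark coordinates (the toy array and landmarks are illustrative):

    import numpy as np
    from scipy.ndimage import map_coordinates

    # Toy 3-D field whose value equals the first-axis index
    vol = np.zeros((4, 4, 4))
    for i in range(4):
        vol[i, :, :] = i

    # Landmarks as an [n, 3] array of voxel coordinates
    landmarks = np.array([[0.5, 1.0, 2.0],
                          [2.5, 3.0, 0.0]])

    # map_coordinates expects coordinates with shape [3, n],
    # hence the .transpose() in the TRE code; order=1 is linear interpolation
    vals = map_coordinates(vol, landmarks.transpose(), order=1)
    # vals ≈ [0.5, 2.5]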