Open fangfangtuTang opened 1 week ago
The keypoints_sampler function assumes key points are provided in voxel space. For an input image with dimensions H×W×D, each key point (x, y, z) should satisfy x ∈ [0, W), y ∈ [0, H), z ∈ [0, D). If your data augmentation includes spatial transformations (e.g., random flipping), you will need to transform the key points accordingly. See https://github.com/yihao6/vfa/blob/main/vfa/datasets/l2r2022nlst_dataset.py for an example of loading key points for the L2R 2022 NLST dataset. https://github.com/yihao6/vfa/blob/main/vfa/models/base_model.py should also be helpful for training with key points.
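To illustrate the flipping case, here is a minimal sketch (not the repo's actual implementation — the function name `flip_keypoints` and the argument layout are assumptions for illustration) of how voxel-space key points could be mirrored to stay consistent with a flipped H×W×D image:

```python
import numpy as np

def flip_keypoints(keypoints, image_shape, axes):
    """Mirror voxel-space keypoints to match a spatially flipped image.

    keypoints   : (N, 3) array of (x, y, z) voxel coordinates, with
                  x in [0, W), y in [0, H), z in [0, D)
    image_shape : (H, W, D) of the image the keypoints belong to
    axes        : iterable of coordinate indices (0=x, 1=y, 2=z) that were flipped
    """
    H, W, D = image_shape
    extents = [W, H, D]  # per-coordinate extents matching (x, y, z) order
    kp = np.asarray(keypoints, dtype=float).copy()
    for ax in axes:
        # a flip maps coordinate c to (extent - 1 - c) along that axis
        kp[:, ax] = extents[ax] - 1 - kp[:, ax]
    return kp
```

For example, flipping only the x axis of a key point (1, 2, 3) in a (H=10, W=8, D=6) volume maps x to 8 − 1 − 1 = 6, giving (6, 2, 3). The same per-axis `extent − 1 − c` rule applies to any combination of flipped axes.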
I would like to train your model on my own dataset, which includes paired landmarks for both the fixed and moving images, and I would like to deform these landmarks directly using the keypoints_sampler function inside your model. My question is: what conditions do these landmarks need to meet?