Open epratheeban opened 1 month ago
Hi @epratheeban,
Thanks for your question.
Best, Qin
@qinliuliuqin Thanks for the message. I'm really interested in exploring your approach from the paper further.
I'm not particularly interested in your dataset, as I have my own dataset of 200 head CTs with segmentation masks of the bones and 59 annotated cephalometric landmarks. I managed to rewrite your code base to evaluate a validation set during the training pipeline and to save the best model based on the validation loss.
Would it be possible for you to share the config and model network used for training, along with the script used for preprocessing the CT DICOM or NIfTI files?
I'd really appreciate any further information you can provide.
Thanks in advance.
Hi @epratheeban,
Thanks for your questions again. 1. Training configs can be found here. I refactored the code a lot, but the original training configs are still there.
Let me know if you have more questions.
Hi @qinliuliuqin ,
Thank you so much for sharing the code. Looking through the training config, I have one question to clarify before I adapt the code. If I understand correctly, you only used 8 landmarks per training config.
Am I right?
Hi @epratheeban ,
Yes, we want to deploy our model on CPU machines for practical applications, so we split all landmarks into different groups and use a separate model to detect each group. We should have mentioned this in our paper. If you don't have memory constraints, you can simply merge all landmarks together and train a single model to detect them. However, you need to be careful because some landmarks may be very close.
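To make the grouping idea concrete, here is a minimal sketch of splitting 59 landmarks into groups of at most 8, each to be detected by its own lightweight model. The landmark names here are placeholders, and the actual grouping in the repo may be chosen differently (e.g. by anatomical region):

```python
# Sketch: split 59 landmark names into groups of at most 8,
# so each group can be handled by a separate small model.
landmark_names = [f"landmark_{i}" for i in range(59)]  # hypothetical names
group_size = 8
groups = [landmark_names[i:i + group_size]
          for i in range(0, len(landmark_names), group_size)]
# 59 landmarks -> 8 groups: seven of size 8 and one of size 3
```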
Hi @qinliuliuqin ,
Thanks for the info. After inspecting the landmark mask labels, some landmarks are indeed very close, as you said. For now I will start with 8 classes and gradually increase the count to see how performance changes. I have two more questions.
I noticed the code uses `is_voxel_coordinate_valid` for filtering some landmarks out. If my understanding is correct, this means those landmark coordinates fall outside the image?
Hi @epratheeban ,
Thanks for your follow-up questions. It seems you are making progress. As for your questions:
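Regarding `is_voxel_coordinate_valid`: yes, a voxel coordinate is considered invalid when it falls outside the image bounds. A minimal sketch of such a bounds check (the repo's exact implementation may differ in details):

```python
def is_voxel_coordinate_valid(coord, image_size):
    """Return True iff the voxel coordinate lies inside the image volume.

    Sketch only. coord: (x, y, z) voxel indices; image_size: (W, H, D).
    """
    return all(0 <= c < s for c, s in zip(coord, image_size))

# A landmark with a negative or out-of-range index gets filtered out:
print(is_voxel_coordinate_valid((-21, 243, 117), (512, 512, 218)))  # False
print(is_voxel_coordinate_valid((299, 105, 179), (512, 512, 218)))  # True
```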
Feel free to let me know if you have any other follow-up questions.
Best, Qin
Hi @qinliuliuqin ,
For some reason, the landmarks end up outside the voxel region during landmark mask generation. The problem is that we used a commercial tool for annotating the landmarks, and unfortunately nobody knows which coordinate system the landmarks are in. I tried converting back and forth between LPS and world coordinates a few different ways, but nothing has worked so far.
Could you please advise me on a few things?
By the way, I loaded one of the NIfTI files in Slicer and tried to place the landmark, as you can see in the image.
The landmark point in the RAS coordinate system is (-136.658, 234.276, 153.673),
and the exported LPS coordinates are [136.65760803222656, -234.27621459960938, 153.67263793945313].
But the annotated landmark from this commercial tool is (178.78, 98.14, 101.67). When I convert these coordinates to voxel coordinates using the function gen_single_landmark_mask,
I get the voxel coordinates (-21, 243, 117),
which is an invalid landmark.
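For reference, the Slicer RAS-to-LPS step itself is just a sign flip on the first two axes; a quick sketch of the conversion I'm using:

```python
def ras_to_lps(p):
    """Negate the R->L and A->P axes; RAS and LPS share the S axis."""
    x, y, z = p
    return (-x, -y, z)

print(ras_to_lps((-136.658, 234.276, 153.673)))  # (136.658, -234.276, 153.673)
```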
So now I'm confused about which landmark coordinates I should choose.
Sorry for the basic questions; I'm new to medical image processing.
Thanks, Pratheeban
Hi @epratheeban ,
No worries, the issue seems to be an easy one.
Let me know if you have other questions.
Best, Qin
Hi @qinliuliuqin
Thanks Qin. The picture explains how I have to annotate the landmarks. I will probably start by converting the RAS coordinates to RAI. One last question about the preprocessing step: some of the DICOMs have an image origin that is not (0,0,0). In this case, is it OK to recenter them to (0,0,0), or should I just keep the existing origin? Is this step essential?
Thanks again, Pratheeban
Hi @epratheeban ,
I would suggest that you keep the images as they were, because discarding or changing image information might cause undesirable consequences. For example, without the correct origins you can't properly align two medical images for visualization in ITK-SNAP. As long as you record the correct world coordinates, manually changing image information is unnecessary. Another solution you may consider is to directly record the voxel coordinates and then convert them to world coordinates using SimpleITK. However, voxel coordinates might not be as accurate as world coordinates, especially when the voxel spacing is large.
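For reference, converting a voxel index to world (LPS) coordinates follows `world = origin + direction @ (spacing * index)`, which is the mapping SimpleITK applies in `TransformContinuousIndexToPhysicalPoint`. A sketch of the math with NumPy, using a made-up origin and identity direction for illustration:

```python
import numpy as np

def voxel_to_world(index, origin, spacing, direction):
    """world = origin + direction @ (spacing * index).

    The same mapping SimpleITK performs in
    TransformContinuousIndexToPhysicalPoint (LPS convention).
    """
    index = np.asarray(index, dtype=float)
    return np.asarray(origin) + np.asarray(direction) @ (np.asarray(spacing) * index)

# Hypothetical example: identity direction, 1 mm isotropic spacing,
# made-up origin -- replace with your image's actual metadata.
world = voxel_to_world([10, 20, 30], [-100.0, -100.0, -50.0],
                       [1.0, 1.0, 1.0], np.eye(3))
# world == [-90.0, -80.0, -20.0]
```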
Best, Qin
hi @qinliuliuqin
Thanks for your answer; it is indeed helpful. Here I'm sharing what I found, in case it is useful for someone reading this thread.
For labelling landmarks in 3D, our annotators find it easier to annotate in Slicer rather than in ITK-SNAP.
So they annotated a point in 3D on the NIfTI image. The NIfTI file:
Image dimension = [512, 512, 218]
Image origin = [157.6910, 341.1910, -15.300]
Affine matrix = [[-1, 0, 0 ], [0, -1, 0], [0, 0, 1]]
The RAS coordinate for one landmark:
RAS coordinate = [-141.057, 236.523, 163.852]
So, converting this RAS point to voxel coordinates directly with your code, I got the voxel coordinates below:
voxel coordinates = [299, 578, 179]
This is an invalid point, since one of the dimensions is 578, which is greater than the original image dimension.
After your previous answer, I converted the RAS coordinates to RAI coordinates.
Now the RAI coordinate is
RAI = [298.748, 104.668, 148.552]
Converting the above to voxel coordinates using your script, I got
voxel coordinates = (449, 446, 164)
To check whether this voxel coordinate lies in the right place, I loaded the landmark mask as a segmentation in Slicer.
The green dot is the segmentation mask label, which sits above the red one.
Hence I decided to reset the original image origin to (0,0,0) and loaded the reoriented NIfTI file in Slicer in parallel.
I still got a bit of an offset from the actual points, so I decided to label the NIfTI image with the reset origin.
Now the RAS point = [-298.701, -103.237, 178.236] converts to RAI = [298.701, 103.237, 178.236],
and the voxel coordinates are the same as the RAI coordinates, [298.701, 103.237, 178.236], because the image origin was recentered to (0,0,0).
Then I again loaded the generated landmark mask and overlaid it on the reset NIfTI image; as you can see below, the landmark coordinate is at the center of the segmentation mask.
After much exploration, I found that the only way to solve this is to reset the image origin of the NIfTI file.
Maybe I am wrong. Please correct me if I have made any mistake.
Thanks
Pratheeban
Hi @epratheeban ,
Thanks for the detailed explanation. It seems you made a mistake while transforming the world coordinates to voxel coordinates. As shown below, the voxel coordinates should be (299, 105, 179) in your case. I also verified this in a notebook. Could you double-check it? Thank you.
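To make the arithmetic concrete, here is a minimal sketch of the conversion using the image information you posted. With the affine diag(-1, -1, 1) and unit spacing, the direction matrix is identity in LPS, so the conversion reduces to a subtraction:

```python
# Quick sketch: RAS landmark -> LPS -> voxel index, for the image above.
def ras_to_lps(p):
    """Negate the R->L and A->P axes; RAS and LPS share the S axis."""
    return (-p[0], -p[1], p[2])

def world_to_voxel(ras_point, ras_origin):
    """Valid only for identity LPS direction and 1 mm isotropic spacing."""
    lps_point = ras_to_lps(ras_point)
    lps_origin = ras_to_lps(ras_origin)
    return tuple(round(p - o) for p, o in zip(lps_point, lps_origin))

voxel = world_to_voxel((-141.057, 236.523, 163.852),  # landmark in RAS
                       (157.691, 341.191, -15.300))   # NIfTI origin in RAS
# voxel == (299, 105, 179)
```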
Best, Qin
@qinliuliuqin
Thanks for the catch. There is something peculiar with Slicer: by default it loads the image in RAS coordinates, while SimpleITK works in LPS. I assumed the NIfTI file had the same image origin as shown in Slicer, but it actually has a different origin. Fixing that worked. Thanks for the notebook; everything is all right now. I will now start annotating the rest of the labels.
@qinliuliuqin, thank you for publishing your code.
Thank you very much for publishing your code and sharing your research with the community. I found your work extremely valuable and insightful.
I noticed that while your paper discusses both landmark coordinate predictions and segmentation mask prediction, the code provided includes only an example for landmark coordinate prediction. Would it be possible for you to add an example case for segmentation as well?
I believe this addition would greatly benefit others who are interested in exploring the segmentation aspect of your work.
Thank you once again for your contribution and for considering this request.
Hi @sankardrbreaths ,
Thanks for your interest. Actually, I have a 3D medical image segmentation code base (https://github.com/qinliuliuqin/Medical-Segmentation3d-Toolkit). It shares the same segmentation backbone as this detection method. I separated segmentation and detection into two repositories to make them more general. It's a great suggestion that I should merge them to help others reproduce the results in our paper. I will do it once I get time!
Hi Qin, thanks for publishing your code. However, there are a few things I do not understand. Please reply when you have time.
Is there any specific reason for that?