cwmok / LapIRN

Large Deformation Diffeomorphic Image Registration with Laplacian Pyramid Networks
MIT License
121 stars 21 forks

atlas #17

Closed cy-yong closed 2 years ago

cy-yong commented 2 years ago

I'm sorry to bother you again, but I would like to ask whether the atlas in the paper is one you selected yourself. Is it a template image with labels?

cwmok commented 2 years ago

Yes, it is just an arbitrary image scan with anatomical segmentation.

cy-yong commented 2 years ago

Can I ask you two questions? First, does your fixed image carry an anatomical segmentation during the registration process? Second, if my fixed image has a brain parcellation label as above, will registration also follow the label?

cwmok commented 2 years ago
  1. Our method is fully unsupervised, i.e., we didn't use any anatomical segmentation during training.
  2. Yes, you can register the brain parcellation label to the atlas space using the same deformation field.
cy-yong commented 2 years ago
  1. Our method is fully unsupervised, i.e., we didn't use any anatomical segmentation during training.
  2. Yes, you can register the brain parcellation label to the atlas space using the same deformation field.

When I finished training, I tried to register to the fixed image with the brain parcellation, but the result showed no similar brain parcellation.

cy-yong commented 2 years ago

This is the result (screenshot attached).

cwmok commented 2 years ago

Hi @cy-yong,

This seems to be a model collapse in your training.

In your previous issue, I found that the training with your dataset is highly unstable. This may be a potential issue with your preprocessing pipeline.

I remember you tried to use FreeSurfer for skull stripping on your dataset but failed. Recently, I came across a method called "SynthStrip" which may work for your dataset.

I left the link to SynthStrip here: https://surfer.nmr.mgh.harvard.edu/docs/synthstrip/

I highly recommend you follow the standard preprocessing pipeline mentioned in our paper (at least perform brain extraction + skull stripping + N3 bias field correction).

Feel free to let me know the result.

cy-yong commented 2 years ago

I have tried it. I downloaded each of the scripts and ran them, but an error appears (screenshot attached). I don't know if FreeSurfer is missing, or if I have to do something else with Docker or a Singularity container.

cwmok commented 2 years ago

For the Docker or Singularity wrapper, you need to first install "Docker" or "Singularity" in your system.

https://docs.docker.com/engine/install/ubuntu/ https://docs.sylabs.io/guides/3.0/user-guide/installation.html

"Docker" and "Singularity" are basically software that helps you build a virtual environment on your system. They have special commands to execute the SynthStrip scripts.

Moreover, have you tried VoxelMorph for your dataset? Is the training stable? It seems to be a reasonable baseline for your application.

cy-yong commented 2 years ago

I have not tried VoxelMorph on my dataset, because its data loading seems complicated. May I ask whether you used VoxelMorph as a comparison baseline for your method?

cwmok commented 2 years ago

You should definitely try out VoxelMorph as well. Our paper also compares our method with theirs.

cy-yong commented 2 years ago

I will try VoxelMorph later, but before that I need to finish the first experiment with your method. I still have some problems: I don't know whether your method can complete the registration of brain parcellation as above.

cwmok commented 2 years ago

Is it a mono-modal registration task? If yes, our method can apply to this task. From your attached figure, there is a huge discrepancy in image contrast.

You may need a more robust similarity measure such as the dice score of the brain parcellation label to guide the training, see here for our semi-supervised implementation. MSE will not work in cases with large contrast differences.
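For reference, the Dice guidance mentioned here can be sketched roughly as follows (a minimal soft-Dice term in PyTorch; this is an illustration of the measure, not the repo's exact semi-supervised implementation, and the one-hot layout is my own assumption):

```python
import torch

def soft_dice(warped_onehot, fixed_onehot, eps=1e-5):
    """Per-class soft Dice between one-hot label volumes.

    warped_onehot, fixed_onehot: (N, C, D, H, W) tensors with one channel
    per parcellation class. Returns a (N, C) tensor of Dice scores;
    1 - soft_dice(...).mean() can serve as a loss term.
    """
    dims = (2, 3, 4)  # sum over the spatial axes only
    inter = (warped_onehot * fixed_onehot).sum(dims)
    denom = warped_onehot.sum(dims) + fixed_onehot.sum(dims)
    return (2 * inter + eps) / (denom + eps)
```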

cy-yong commented 2 years ago

I don't seem to understand what you mean by a mono-modal registration task. This image is a T1-weighted image; different colors indicate different areas, for example cerebrospinal fluid (CSF), bone marrow, LV, skull, image background, WM and GM. If the template image has this region segmentation, can we register it region by region?

cwmok commented 2 years ago

T1 to T1 registration is a mono-modal registration task. But you have to take care of regions with inconsistent intensity values, e.g., the outer shell of the brain is white in one image but black in the other.

For example, Cerebrospinal Fluid (CSF), bone marrow, LV, Skull, Image Background, WM and GM, if the template image has this region segmentation, can we register it by region when we register it?

What do you mean "register it by region"? If only the template image has segmentation, you cannot train it in a semi-supervised fashion. I believe using NCC to train your model will work (make sure the background of each scan has the 0 image intensity).
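As a rough illustration of the NCC suggestion (a global normalized cross-correlation in PyTorch; the loss actually used by registration networks is usually a local, windowed NCC, so treat this only as a sketch of the measure):

```python
import torch

def global_ncc(a, b, eps=1e-8):
    """Global normalized cross-correlation between two image tensors.

    Returns a value in [-1, 1]; identical images score ~1, so
    1 - global_ncc(a, b) can be minimized as a similarity loss.
    """
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)
```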

cy-yong commented 2 years ago

Since your paper is on large deformation diffeomorphic image registration, my purpose is this: the template image has a brain parcellation label, while the target image does not. Through the mapping between the target image and the template image, I want to inversely map the brain parcellation label of the template image onto the target image. Do you see what I mean?

cwmok commented 2 years ago

I get it now. Yes, I think propagating the brain parcellation label back using the resulting deformation field will work. You just need to make sure the registration result is accurate.

You may also check out this paper: https://www.cell.com/neuron/fulltext/S0896-6273(02)00569-X?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS089662730200569X%3Fshowall%3Dtrue It demonstrated how to use image registration to segment the whole brain.

cy-yong commented 2 years ago

So you're saying that I can use your method to inversely map the brain parcellation label of the template image to get the brain parcellation label of the target image, right?

cwmok commented 2 years ago

I believe so. But I haven't tested it before.

cy-yong commented 2 years ago

Since the paper is about large deformation diffeomorphic image registration, I think it may be feasible, and I will try it. If I run into any problems, I would be very grateful if you could answer them.

cy-yong commented 2 years ago

I have done skull stripping, but I do not know how to do brain extraction and N3 bias field correction.

cwmok commented 2 years ago

After skull stripping, you should already have the whole-brain image (brain extraction). Remember to check the background intensity. N3/N4 bias field correction can be done with SimpleITK (see https://simpleitk.readthedocs.io/en/master/link_N4BiasFieldCorrection_docs.html).

cy-yong commented 2 years ago

I would like to ask where I can get this atlas. Is it the image with the brain parcellation label?

cwmok commented 2 years ago

Here it is. https://github.com/adalca/medical-datasets/blob/master/neurite-oasis.md

cy-yong commented 2 years ago

When I finished training, I used the template image with the brain parcellation label as the fixed image and my data as the moving image, but I found that I could not get any result.

cwmok commented 2 years ago

Could you show me the fixed image, moving image and warped image? Is the model properly trained?

cy-yong commented 2 years ago

Result.zip: the fixed image, moving image, and warped image.

cwmok commented 2 years ago

There are many mistakes in your approach.

  1. You should not use the segmentation label to serve as the fixed image.
  2. Your moving image is in RAI orientation, but the label is in LPI orientation.

Correct ways:

  1. Train the model using MR scans (not the segmentation label)
  2. During testing, you may set the template image as the moving image and register it to an image with no label.
  3. If the registration result is good, you need to change the test script to propagate the atlas label to the fixed image.
  4. "F_X_Y" in the test script denotes the deformation field. Use "F_X_Y" to warp the segmentation label.

Alternatively, you can set the template image as the fixed image. In this case, you will need the velocity field "F_xy". Negate "F_xy", i.e., "-1 * F_xy" and integrate it from time 0 to 1 using "DiffeomorphicTransform_unit(time_step=7).cuda()" to obtain "F_YX". Use "F_YX" to transform the label image.
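The label-propagation step described above can be sketched in PyTorch as follows. This is a hedged illustration: `warp_label` and its tensor layout are my own names, and the channel ordering/scaling of the repo's `F_X_Y` field may differ, so check it against the actual test script. The key detail is `mode="nearest"`, which keeps the warped labels integer-valued:

```python
import torch
import torch.nn.functional as F

def warp_label(label, flow):
    """Warp an integer label volume with a dense displacement field.

    label: (1, 1, D, H, W) float tensor holding integer label values
    flow:  (1, 3, D, H, W) displacement field in normalized [-1, 1] units
           (assumed to match grid_sample's coordinate convention)
    """
    # Identity sampling grid in normalized coordinates, shape (1, D, H, W, 3).
    theta = torch.eye(3, 4).unsqueeze(0)
    grid = F.affine_grid(theta, label.shape, align_corners=True)
    # Add the displacement (move flow channels to the last dimension).
    sample = grid + flow.permute(0, 2, 3, 4, 1)
    # Nearest-neighbour interpolation avoids mixing label values.
    return F.grid_sample(label, sample, mode="nearest", align_corners=True)
```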

cy-yong commented 2 years ago

Result.zip: my model, the fixed image, moving image, and warped image.


I found that there was something wrong with it, but I didn't know what the problem was.

cwmok commented 2 years ago

Which one is the fixed image? The "031768-label.nii"?

cy-yong commented 2 years ago

yes!

cwmok commented 2 years ago

You don't seem to understand what I have said. Maybe we can talk on WeChat or schedule a short meeting? What do you think?

You could send your WeChat ID to me via email.

cy-yong commented 2 years ago

I have sent my WeChat ID to you via email. Have you received it?

cwmok commented 2 years ago

I didn't receive your email. Could you double-check your sending address? My email address is cwmokab 'at' connect 'dot' ust 'dot' hk.

cy-yong commented 2 years ago

There was something wrong with the previous email address. I reconfirmed it and resent my ID.

cy-yong commented 1 year ago

Dear cwmok,

This is my WeChat ID: 18896146930. I look forward to further communication with you.

chenyong
