xamyzhao / brainstorm

Implementation of "Data augmentation using learned transforms for one-shot medical image segmentation"
MIT License

Regarding the network architecture #15

Closed John1231983 closed 5 years ago

John1231983 commented 5 years ago

Thanks for sharing this great work!

I have a question about the network architecture. As I understand it, given an input image I, you first feed it to the appearance network to change its intensity, then feed the result of the appearance network to the registration network to register it with the atlas.

Have you investigated the other direction, i.e., first feeding the image to the registration network, and then feeding the moved image to the appearance network? Which one is better?

John1231983 commented 5 years ago

Sorry, I was wrong. You learn them separately.

However, there is one problem when you learn the two networks independently. As shown in block 4 of Figure 2, you feed the atlas image through the appearance network and feed its output to the registration network. But the registration network is learned using the original atlas and target subjects (without the appearance change), so how can it work if we feed the output of the appearance network into it?

xamyzhao commented 5 years ago

Hi John! Fig. 2 block 4 depicts the synthesis process, where we are no longer doing any learning. At this stage, we sample an appearance transformation (which is just a per-voxel additive volume) and a spatial transformation (a 3D flow field). We add the appearance transformation volume to the atlas, and then warp that volume with a flow field. Hope that clears things up.
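In case it helps to see the synthesis step concretely, here is a minimal NumPy/SciPy sketch of "add an appearance volume, then warp with a flow field." The function name and signature are my own for illustration; this is not the repo's actual code (see `src/segmenter_model.py` for that).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def synthesize(atlas, delta_appearance, flow):
    """Illustrative sketch of the synthesis step (not the repo's API).

    atlas:            3D intensity volume
    delta_appearance: per-voxel additive appearance volume, same shape as atlas
    flow:             dense displacement field, shape atlas.shape + (3,)
    """
    # 1) apply the sampled appearance transformation (purely additive)
    shifted = atlas + delta_appearance
    # 2) warp the result with the sampled flow field: resample each voxel
    #    at its identity coordinate plus the flow displacement
    grid = np.meshgrid(*[np.arange(s) for s in atlas.shape], indexing="ij")
    coords = [g + f for g, f in zip(grid, np.moveaxis(flow, -1, 0))]
    return map_coordinates(shifted, coords, order=1, mode="nearest")
```

With a zero appearance delta and a zero flow field, `synthesize` returns the atlas unchanged, which is a handy sanity check.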

John1231983 commented 5 years ago

So the flow is like:

(input, atlas)-->reg_net-->flow
(input, atlas)-->appearance net-->output
output-->warp(flow)-->output_moved
label-->warp(flow)-->label_moved

Am I right?
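The four steps above can be sketched end to end. Note that `reg_net` and `appearance_net` here are hypothetical stand-ins (zero flow, identity appearance) just to show how the pieces connect; they are not the repo's models. Intensities are warped with linear interpolation, labels with nearest-neighbor so label values stay discrete.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(vol, flow, order=1):
    """Warp a 3D volume by a dense flow field of shape vol.shape + (3,).
    Use order=1 for intensity images, order=0 (nearest) for label maps."""
    grid = np.meshgrid(*[np.arange(s) for s in vol.shape], indexing="ij")
    coords = [g + f for g, f in zip(grid, np.moveaxis(flow, -1, 0))]
    return map_coordinates(vol, coords, order=order, mode="nearest")

# Hypothetical stand-ins for the two trained networks (NOT the repo's API):
reg_net = lambda input_vol, atlas: np.zeros(atlas.shape + (3,))  # -> flow
appearance_net = lambda input_vol, atlas: atlas                  # -> recolored atlas

def synthesize_labeled_example(input_vol, atlas, atlas_labels):
    flow = reg_net(input_vol, atlas)              # (input, atlas) -> flow
    output = appearance_net(input_vol, atlas)     # (input, atlas) -> output
    output_moved = warp(output, flow, order=1)    # output -> warp(flow)
    label_moved = warp(atlas_labels, flow, order=0)  # label -> warp(flow)
    return output_moved, label_moved
```

With the identity stand-ins above, the synthesized example is just the atlas and its labels, which makes the plumbing easy to verify before plugging in real networks.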

xamyzhao commented 5 years ago

That's correct! You can find the code that does this in src/segmenter_model.py: https://github.com/xamyzhao/brainstorm/blob/ac3489c1152d01e345c80741d1626741c124011e/src/segmenter_model.py#L577-L593