Closed: takerujason closed this issue 4 years ago
Hi @JCFu,
The 3D volumes we used in our experiments are made up of multiple 2D slices, and we only used the center slice; that is, we picked the values from channel 120 of each 3-dimensional matrix (i.e. each brain volume).
An image registration algorithm might be needed to align each brain image to a template, in order to get good results when training the CycleGAN later on.
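The center-slice extraction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' actual code: the 240-slice volume shape and the nibabel loading hint are assumptions on my part.

```python
import numpy as np

# Stand-in brain volume; in practice you would load it from disk,
# e.g. with nibabel: volume = nibabel.load("brain.nii.gz").get_fdata()
# (nibabel and the file name are assumptions, not part of this thread).
volume = np.random.rand(240, 240, 240)  # (height, width, slices)

# Pick the center slice along the slice axis; the thread mentions
# index 120, which is the center of a 240-slice volume.
center = volume.shape[2] // 2           # -> 120 here
slice_2d = volume[:, :, center]

print(slice_2d.shape)  # (240, 240)
```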
Regards, Simon
@simontomaskarlsson Thank you!
Hi @simontomaskarlsson ,
Could you share the skull-stripping and preprocessing tools you used in this work?
I tried to skull-strip and normalize the MR-T2 images with FreeSurfer, but the results were poor.
Thanks a lot!
Hi @JCFu,
In our work, the DICOM images were first registered and then cropped slightly. We used MATLAB and something similar to this: https://se.mathworks.com/help/images/registering-multimodal-mri-images.html
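For readers without MATLAB, the core idea of rigid registration can be illustrated with FFT phase correlation in plain NumPy. This is only a sketch of the simplest case (integer translation between two images); the MATLAB workflow linked above additionally handles rotation, scaling, and multimodal intensity differences.

```python
import numpy as np

def estimate_translation(fixed, moving):
    """Estimate the integer (row, col) shift d such that
    moving == np.roll(fixed, d), using FFT phase correlation.
    A minimal rigid-registration step; real pipelines (e.g. the
    MATLAB one in the link above) do much more than this."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
    cross = np.conj(F) * M
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

# Toy example: shift a synthetic image by (5, -3) and recover the shift.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
moved = np.roll(img, shift=(5, -3), axis=(0, 1))
print(estimate_translation(img, moved))  # (5, -3)
```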
Thank you @simontomaskarlsson , I will try the method you provided.
Hi @simontomaskarlsson , there is another issue: how do I run the Generator, CycleGAN and CycleGAN_s methods mentioned in the paper? What are their network architectures?
Thanks!
Hi @JCFu,
When running the code, the whole CycleGAN model, i.e. two generators and two discriminators, is set up and trained automatically. You do not need to run any separate methods; just follow the instructions. See rows 327-357 for the generator model, together with the helper functions on rows 248-291.
Hi @simontomaskarlsson ,
The synthetic images have been generated with your method. But how do I compute the relative error between real and synthetic images, as in Fig. 2 of the paper?
I have searched for relative-error code online without success. Could you provide the code or some guidance?
Thank you so much!
Hi,
Can't seem to find the code I wrote, but the relative error is basically just (real_img - synt_img)/real_img
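Simon's formula translates directly to a per-pixel NumPy operation. One caveat not mentioned in the thread: real MR images contain zero-valued background pixels, so a small `eps` in the denominator (my addition, not part of the original formula) avoids division by zero.

```python
import numpy as np

def relative_error(real_img, synt_img, eps=1e-8):
    """Per-pixel relative error (real - synthetic) / real, following
    the formula in the thread; `eps` guards against division by zero
    in background pixels and is an addition of mine."""
    return (real_img - synt_img) / (real_img + eps)

# Toy 2x2 example.
real = np.array([[2.0, 4.0], [8.0, 10.0]])
synt = np.array([[1.0, 4.0], [6.0, 5.0]])
err = relative_error(real, synt)
print(err)  # approx [[0.5, 0.0], [0.25, 0.5]]
```

The resulting error map can then be displayed as an image (e.g. with matplotlib's `imshow`) to produce a figure in the spirit of Fig. 2.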
Regards, Simon
Hi @simontomaskarlsson ,
I am a newcomer to this field and ran into a problem at the very first step, so I would like your help.
I notice that 900 T1-weighted and 900 T2-weighted MR images were used in your experiment, which is not a small amount of data. I have collected many 3D brain MR images from public datasets. If I manually convert the 3D images into 2D images one by one, it will take a lot of time. I also tried some GUI tools, such as Mango and MRIcro, but the converted images are distorted or incompletely displayed.
So, could you suggest a good way to batch-extract the axial slice of each brain volume? If not, could you recommend a software tool, with brief steps, that can extract 2D slices one by one?
Should I preprocess the original images first, e.g. motion correction and conforming?
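For the batch-extraction part of this question, a small script is usually enough once the volumes are loaded as arrays. The sketch below assumes the volumes are already in memory as NumPy arrays (loading them from NIfTI/DICOM, e.g. with nibabel, is an assumption of mine; the thread does not name a loader):

```python
import numpy as np

def extract_axial_center_slices(volumes):
    """Return the center axial slice of each 3D volume (y, x, z)
    as a 2D array. Volumes may have different slice counts."""
    return [vol[:, :, vol.shape[2] // 2] for vol in volumes]

# Stand-in for a folder of loaded volumes with differing depths.
vols = [np.zeros((240, 240, 155)), np.zeros((240, 240, 240))]
slices = extract_axial_center_slices(vols)
print([s.shape for s in slices])  # [(240, 240), (240, 240)]
```

Each 2D slice can then be saved with `np.save` or written out as an image for training.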
Thank you very much!