Hi professor @junyanz , I'm modifying the current 2D CycleGAN repository into a 3D CycleGAN for medical image translation between different tracers with an unpaired dataset. However, I have tried various configurations, and most of the inferred images lack detail. I'm trying larger --ngf, --ndf, and a larger patch size to see if the network can learn more detail. I'm also adding data augmentation: horizontal/vertical flips and -30 to 30 degree random rotations about the x, y, and z axes. Next, I'm going to see whether changing --netG from resnet to unet helps with detail. Could you give me any suggestions? Thank you.
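For reference, here is a minimal sketch of the 3D augmentation I described, written with numpy and scipy.ndimage (this helper is my own illustration, not part of the repository):

```python
import numpy as np
from scipy.ndimage import rotate


def augment_volume(vol, rng):
    """Random flips plus a -30..30 degree rotation about each axis.

    `vol` is a 3D numpy array (D, H, W); shape is preserved.
    """
    # Random flips along each spatial axis with probability 0.5
    for axis in (0, 1, 2):
        if rng.random() < 0.5:
            vol = np.flip(vol, axis=axis)
    # Random rotations about x, y, z; reshape=False keeps the volume size,
    # order=1 (trilinear) avoids spline overshoot on intensity images
    for axes in ((1, 2), (0, 2), (0, 1)):
        angle = rng.uniform(-30.0, 30.0)
        vol = rotate(vol, angle, axes=axes, reshape=False,
                     order=1, mode="nearest")
    return vol


rng = np.random.default_rng(0)
vol = rng.standard_normal((32, 32, 32)).astype(np.float32)
aug = augment_volume(vol, rng)
print(aug.shape)  # (32, 32, 32)
```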
Image 1: Real A (a real image of Tracer A, used as the input for inference)
Image 2: Real B (not the ground truth of Image 3, but a real image of Tracer B, i.e. the target appearance I want)
Image 3: Fake B inferred from Image 1 (result at epoch 180; the learning rate is held at 0.0002 and will decay linearly to zero from epoch 200 to 400)
You can see that the inferred Image 3 still lacks much of the detail of Image 2 even after 180 epochs, while the loss log looks very stable apart from a few normal spikes.
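To be concrete about the schedule at epoch 180: the learning rate is still in its constant phase, since with this setup the rate stays at 2e-4 for the first 200 epochs and then decays linearly to zero over the next 200 (the repo's 'linear' lr_policy with --n_epochs 200 --n_epochs_decay 200; this snippet is my own sketch of that rule):

```python
base_lr = 2e-4          # initial learning rate
n_epochs = 200          # epochs at constant lr
n_epochs_decay = 200    # epochs of linear decay to zero


def lr_at(epoch):
    """Learning rate under the linear-decay policy described above."""
    scale = 1.0 - max(0, epoch - n_epochs) / float(n_epochs_decay)
    return base_lr * scale


print(lr_at(180))  # 0.0002 (still in the constant phase)
print(lr_at(300))  # 0.0001 (halfway through the decay)
print(lr_at(400))  # 0.0
```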