cyclomon / 3dbraingen

Official Pytorch Implementation of "Generation of 3D Brain MRI Using Auto-Encoding Generative Adversarial Network" (accepted by MICCAI 2019)

Generator loss differs from the original article in the Alpha_WGAN_ADNI_train.ipynb notebook. #11

Open ShadowTwin41 opened 3 years ago

ShadowTwin41 commented 3 years ago

In the article (https://arxiv.org/pdf/1908.02498.pdf), the generator loss is calculated using only the d_loss and the l1_loss; the c_loss is used only in the lossCodeDiscriminator calculation. Please let me know whether this is correct.
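
For reference, a minimal sketch contrasting the two generator objectives being compared. Variable names follow the thread (d_loss, c_loss, l1_loss, loss1); the LAMBDA weight, the signs, and the exact tensors are placeholder assumptions, not the notebook's actual values:

```python
import torch
import torch.nn.functional as F

LAMBDA = 10.0  # placeholder reconstruction weight, not the notebook's value

def generator_loss_article(d_fake, x_rec, x_real):
    # As the article describes it: adversarial term from D plus L1 only.
    d_loss = -d_fake.mean()             # WGAN generator term
    l1_loss = F.l1_loss(x_rec, x_real)  # reconstruction term
    return d_loss + LAMBDA * l1_loss

def generator_loss_notebook(d_fake, c_fake, x_rec, x_real):
    # As in the notebook's loss1: the code-discriminator term c_loss
    # is added to the generator/encoder objective as well.
    d_loss = -d_fake.mean()
    c_loss = -c_fake.mean()             # code-discriminator term
    l1_loss = F.l1_loss(x_rec, x_real)
    return d_loss + c_loss + LAMBDA * l1_loss
```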

elevenjiang1 commented 2 years ago

So, have you tested which is better? I am now using this work to generate 3D voxel data, but I cannot get a good result, and I don't know where the error is...

ShadowTwin41 commented 2 years ago

I have changed the loss functions of the generator and the discriminator. I recommend checking whether there is mode collapse (i.e., whether the discriminator or the generator is winning) and looking at more work on 3D generation. Have you tried using spectral normalisation? That can be a big improvement. Anyway, I used the c_loss as in this implementation.
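
A minimal sketch of adding spectral normalisation to a 3D discriminator with PyTorch's built-in `torch.nn.utils.spectral_norm` wrapper, as suggested above. The layer sizes are illustrative, not taken from this repo:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Wrap each weight-bearing layer so its spectral norm is constrained to 1,
# which tends to stabilise the discriminator in GAN training.
discriminator = nn.Sequential(
    spectral_norm(nn.Conv3d(1, 64, kernel_size=4, stride=2, padding=1)),
    nn.LeakyReLU(0.2, inplace=True),
    spectral_norm(nn.Conv3d(64, 128, kernel_size=4, stride=2, padding=1)),
    nn.LeakyReLU(0.2, inplace=True),
    spectral_norm(nn.Conv3d(128, 1, kernel_size=4, stride=1, padding=0)),
)
```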

elevenjiang1 commented 2 years ago

Thanks for your reply. I changed the loss function and removed c_loss from loss1, but the results are still very bad! I trained on the ModelNet40_normalized data, only the chair class, on a 3090 GPU for a whole morning, but the result is still very bad. I also found that the model has to be in eval() mode for different noise inputs to produce different shape outputs (but that didn't help the 3D GAN). Is 3D object generation more difficult than MRI data generation, or is there some trick I don't know?

Here is my result: [image attachment: 899-res]

Before this, I used a 3D GAN and also found that the output mode-collapsed (different noise inputs produced the same voxel output). If I confirm that this is the problem, what can I do to solve it? Train the generator more times per iteration and the discriminator fewer?

Thank you again~

ShadowTwin41 commented 2 years ago

The complexity can also come from the resolution of the images: the higher the resolution, the more difficult it is to keep the training stable. I think you need more than one morning of training; this architecture requires a lot of computing power and time. For example, in one of my works I trained it for 6 days and the results improved dramatically. To solve the mode-collapse problem, you need to find out which model is winning and reduce its "power", for example by changing its learning rate or the number of updates per iteration.
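
A minimal sketch of that rebalancing pattern under WGAN-style losses. The models, batch data, learning rates, and update counts here are hypothetical stand-ins; only the pattern of weakening the winning model (lower learning rate, fewer updates per iteration) is the point:

```python
import torch
import torch.nn as nn

generator = nn.Linear(100, 32)      # hypothetical stand-in models
discriminator = nn.Linear(32, 1)

# Example: if the discriminator is winning, weaken it with a lower
# learning rate and fewer updates per iteration than the generator.
g_optim = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_optim = torch.optim.Adam(discriminator.parameters(), lr=4e-5)
g_iters, d_iters = 2, 1

for step in range(1000):
    real = torch.randn(8, 32)       # hypothetical real batch
    for _ in range(d_iters):
        z = torch.randn(8, 100)
        fake = generator(z).detach()
        # WGAN critic loss: push real scores up, fake scores down.
        d_loss = -discriminator(real).mean() + discriminator(fake).mean()
        d_optim.zero_grad(); d_loss.backward(); d_optim.step()
    for _ in range(g_iters):
        z = torch.randn(8, 100)
        # WGAN generator loss: push fake scores up.
        g_loss = -discriminator(generator(z)).mean()
        g_optim.zero_grad(); g_loss.backward(); g_optim.step()
```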