jayurbain opened this issue 4 years ago
Can you try removing the minus sign from this line and see if it improves the result? https://github.com/IAmSuyogJadhav/3d-mri-brain-tumor-segmentation-using-autoencoder-regularization/blob/7471e81923741aa1b3593e4e95fc090c2e96c8cc/model.py#L143
Also, is there a reason why the MRI scans in the second figure are blurry? Are they representing the output of VAE branch?
Will do.
Thanks, Jay
Yes, the second figure shows the generated VAE images. I should have labeled them.
I have another question. Using SimpleITK, your notebook example reads the images in as C, D, H, W (4,155,240,240).
The model specifically states that it expects the data as C, H, W, D. Should this matter? I don't think so as long as you're consistent.
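If the notebook really does hand the model volumes in (C, D, H, W) order, a single `np.moveaxis` call converts them to the (C, H, W, D) order the model expects. A sketch with a dummy array (not the repo's actual preprocessing code):

```python
import numpy as np

# Dummy volume in the (C, D, H, W) order SimpleITK produces:
# 4 modalities, 155 slices, 240 x 240 in-plane.
img_cdhw = np.zeros((4, 155, 240, 240))

# Move the depth axis (axis 1) to the end, giving (C, H, W, D).
img_chwd = np.moveaxis(img_cdhw, 1, -1)
print(img_chwd.shape)  # (4, 240, 240, 155)
```

The same reordering would need to be applied to the labels so that images and segmentations stay aligned.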
Thanks, Jay
That might be a problem. I created the notebook after I had verified the model worked on the BraTS18 dataset, so it may be that I accounted for this earlier but forgot to include the code for it in the notebook.
I'll move the axis and give it a try.
Thanks, Jay
I have additionally tried each of the following:
1. Modified the loss to: `return -K.mean(2 * intersection / dn, axis=[0,1])`
2. Modified the loss to: `return 1 - K.mean(2 * intersection / dn, axis=[0,1])` (output shown in image)
3. Changed the order of dimensions in BraTS from C, D, H, W to C, H, W, D
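For reference, a plain-NumPy sketch of the `1 - ...` variant of the loss above (the repo's version uses Keras backend ops; the definitions of `intersection` and `dn` below follow the usual soft-dice formulation, which I'm assuming matches the repo's):

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-8):
    # Soft dice loss in the "1 - dice" form: non-negative,
    # ~0 for a perfect match, ~1 for no overlap at all.
    intersection = np.sum(y_true * y_pred)
    dn = np.sum(y_true ** 2) + np.sum(y_pred ** 2) + eps
    return 1 - 2 * intersection / dn

y = np.ones((3, 8, 8, 8))
print(round(dice_loss(y, y), 4))                  # perfect overlap -> 0.0
print(round(dice_loss(y, np.zeros_like(y)), 4))   # no overlap -> 1.0
```

With the bare `-K.mean(...)` form the loss is minimised at -1 instead of 0, which changes nothing mathematically but makes the logged loss values harder to read.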
For these latest experiments I used the following dimensions: image.shape (4, 128, 128, 96) label.shape (3, 128, 128, 96)
No improvement. The model is just not learning.
Any further thoughts would be appreciated. The biggest problems, I believe, are with Dec_GT_Output_dice_coefficient and Dec_GT_Output_loss. In other words, the basic learning task. What am I missing that you used to run on BraTS? In addition, I ran your example notebook for 50, 100, and 150 epochs and got the same poor results.
I'm thinking I should just try to train a 3D U-Net without the VAE, or at least set its loss weights very low. Get that to work, then try to add the VAE back in.
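A minimal sketch of what I mean, assuming the total loss is a weighted sum of the dice term and the two VAE terms (the function and weight names here are hypothetical, not from the repo):

```python
def combined_loss(loss_dice, loss_l2, loss_kl, w_l2=0.1, w_kl=0.1):
    # Weighted sum of the segmentation and VAE regularization terms.
    # Setting w_l2 = w_kl = 0 reduces this to a plain 3D U-Net objective.
    return loss_dice + w_l2 * loss_l2 + w_kl * loss_kl

print(round(combined_loss(0.5, 2.0, 1.0), 3))            # VAE terms included
print(round(combined_loss(0.5, 2.0, 1.0, 0.0, 0.0), 3))  # VAE disabled -> 0.5
```

If the plain U-Net objective learns on its own, the VAE weights can then be ramped back up gradually.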
Thanks, Jay
I would appreciate some feedback on my training curves. It looks like over-fitting, which I thought the VAE would address. L2 and KL are set to 0.1 for the VAE in the loss function. The 'Dec_GT_Output_loss' and 'Dec_GT_Output_dice_coefficient' flat-line at zero after ~100 epochs. The VAE portion seems to eventually improve.
After 50 epochs:
Here's an example ground truth:
And the corresponding prediction. Not very good.
After 100 epochs:
After 150 epochs:
Thanks, Jay