[Closed] jlevy44 closed this issue 5 years ago.
Yes, I think it will be a memory issue. If I run it on my laptop's 6 GB GPU (1060 Ti), it still can't process 8x256x256.
By the way, how many parameters does torchsummary report? And did you try batch size 4?
Yeah I just reduced to batch size of 4. I'll see how it goes.
I'll check parameters.
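For reference, the parameter count can be checked with torchsummary, or with a one-liner in plain PyTorch. A minimal sketch (the tiny Sequential model here is a stand-in; the counting idiom is the same for the U-Net variants in this repo):

```python
import torch.nn as nn

# Hypothetical tiny model standing in for the U-Net.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 4, kernel_size=1),
)

# Total trainable + non-trainable parameters.
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # 260
```

With torchsummary the equivalent would be `summary(model, (3, 256, 256))`, which also prints the per-layer output shapes that dominate GPU memory.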
Looks like batch size 4 did not work. I think 2 might work.
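If batch size 2 fits in memory but trains poorly, gradient accumulation can simulate a larger effective batch without extra GPU memory. A hedged sketch (the linear model, data, and loss here are placeholders for the U-Net, your DataLoader, and your segmentation loss):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the real model, loader, and loss.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()
data = [(torch.randn(2, 4), torch.randn(2, 2)) for _ in range(8)]

accum_steps = 4  # micro-batch of 2, effective batch of 8
optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    # Scale the loss so accumulated gradients average over the effective batch.
    loss = criterion(model(x), y) / accum_steps
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Only the micro-batch's activations live on the GPU at once, which is usually what runs out first here.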
Have you benchmarked the training time on these algorithms? Already, unet is taking a long time.
U-Net doesn't perform well in a stress test, but can you check it after reducing the number of filters? I have tested these nets a lot; U-Net on a 12 GB GPU (8x128x128) takes 1 min 30 sec for each training pass. Nested U-Net will take about double that.
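Reducing the filters can be sketched as scaling the encoder's channel list. This assumes the common [64, 128, 256, 512, 1024] U-Net layout; the actual constructor arguments in this repo may differ:

```python
# Typical U-Net encoder widths (assumed; check the repo's model definitions).
base_filters = [64, 128, 256, 512, 1024]

def scale_filters(filters, factor):
    """Return a reduced channel list, e.g. factor=0.5 halves every stage."""
    return [max(1, int(f * factor)) for f in filters]

print(scale_filters(base_filters, 0.5))  # [32, 64, 128, 256, 512]
```

Halving every stage cuts parameter count and activation memory roughly 4x, since conv weights scale with the product of input and output channels.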
What is the size of your dataset? I have about 60k 512x512 patches. I dropped the number to ~10k, but it's still taking a long time to process; it's been a couple of hours and it's about to finish its first epoch.
My dataset was 5K 256x256 images. Are you using just the net, or the training method as well? If you are using the training loop, don't uncomment the gradient-flow code.
Also, try running the data resized to 96x96 just as a check.
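The 96x96 check can be done on the fly with `torch.nn.functional.interpolate`; a minimal sketch (batch shape is illustrative):

```python
import torch
import torch.nn.functional as F

# Downsample a batch of 512x512 patches to 96x96 before feeding the net.
# Use bilinear for images; for label masks, mode="nearest" avoids mixing classes.
batch = torch.randn(4, 3, 512, 512)
small = F.interpolate(batch, size=(96, 96), mode="bilinear", align_corners=False)
print(small.shape)  # torch.Size([4, 3, 96, 96])
```

If 96x96 trains without OOM, the problem is resolution-driven activation memory rather than the model definition itself.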
Nice set of models. I'm training segmentations of images with dimensions (including batch size) 32x3x512x512 and 4 output classes. For some reason I am getting memory errors using the Nested U-Net (12 GB GPU).
Is this what you'd expect to have happen given the data size and model architecture?
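A back-of-envelope calculation suggests yes: activations dominate, and at batch 32 even a single full-resolution feature map is large. This sketch assumes a first encoder stage with 64 channels in float32 (the repo's actual widths may differ):

```python
# Rough activation memory for one 64-channel feature map at full resolution.
batch, channels, h, w = 32, 64, 512, 512
bytes_fp32 = 4

feature_map_gb = batch * channels * h * w * bytes_fp32 / 1024**3
print(round(feature_map_gb, 2))  # 2.0 GB for a single stage, forward only
```

Nested U-Net keeps many such maps (plus gradients roughly doubling the cost) for its dense skip connections, so 12 GB is exhausted quickly; hence the suggestions above to shrink the batch, resolution, or filter count.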