Open · Levishery opened this issue 2 years ago
Hi @Levishery, thanks for reporting the performance issue! We will investigate it and get back to you. Basically, without AUGMENTOR.SMOOTH, the object masks will have very coarse boundaries after nearest-neighbor interpolation.
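Roughly speaking (the exact implementation in the augmentor may differ), the smoothing does something like the sketch below: each object mask is blurred and re-thresholded to round off the blocky boundaries left by nearest-neighbor interpolation. Looping over every object id and filtering a full-size binary mask for each one is also why the step gets expensive on large volumes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_instance_labels(label, sigma=1.0):
    """Illustrative per-object smoothing of an integer label volume
    (not the exact code in the augmentor)."""
    smoothed = np.zeros_like(label)
    for obj_id in np.unique(label):
        if obj_id == 0:  # skip background
            continue
        mask = (label == obj_id).astype(np.float32)
        # Gaussian blur + re-threshold rounds off the blocky boundaries
        # produced by nearest-neighbor interpolation.
        mask = gaussian_filter(mask, sigma=sigma) > 0.5
        smoothed[mask] = obj_id
    return smoothed
```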
Merry Christmas and Happy New Year!
Thank you very much for your contributions! :)
I'm implementing MALA's network in this pipeline. It saves memory by using convolutions without padding, and can therefore afford a larger input size during training (for example, [64, 268, 268] with a batch size of 4 on a single GPU).
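For reference, with unpadded ("valid") convolutions each k×k×k kernel shrinks every spatial dimension by k − 1, so the output patch is smaller than the input. A rough sketch of the size bookkeeping (the kernel sizes here are illustrative, not the exact MALA layout):

```python
def valid_conv_output(size, kernels):
    """Spatial size after a stack of unpadded ('valid') convolutions:
    each k-sized kernel shrinks every dimension by k - 1."""
    for k in kernels:
        size = [s - (k - 1) for s in size]
    return size

# Two 3x3x3 convolutions, for illustration only.
print(valid_conv_output([64, 268, 268], kernels=[3, 3]))  # -> [60, 264, 264]
```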
However, data loading becomes unaffordably slow at this input size: about 90% of the training time is spent on it. I found that this is caused by SMOOTH, the post-processing applied to the labels after augmentation.
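A quick timing loop like the following makes the split easy to measure (`loader`, `model`, and `optimizer` are placeholders for whatever the training script actually uses, and the backward pass is a stand-in for the real loss):

```python
import time
import torch

def profile_loader(loader, model, optimizer, device, steps=20):
    """Rough split of wall-clock time between data loading (augmentation
    runs in the loader workers) and the forward/backward pass."""
    load_t, compute_t = 0.0, 0.0
    it = iter(loader)
    for _ in range(steps):
        t0 = time.perf_counter()
        volume, target = next(it)   # blocks until the workers deliver a batch;
        t1 = time.perf_counter()    # adjust the unpacking to the loader's output
        out = model(volume.to(device))
        out.sum().backward()        # placeholder for the real loss/backward
        optimizer.step()
        optimizer.zero_grad()
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        t2 = time.perf_counter()
        load_t += t1 - t0
        compute_t += t2 - t1
    print(f"data loading: {load_t:.1f}s  compute: {compute_t:.1f}s")
```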
I wonder if you are aware of this. Would discarding the smoothing affect training much?
Merry Christmas :)