xtudbxk / DSRG-tensorflow

A TensorFlow implementation of DSRG (Weakly-Supervised Semantic Segmentation Network with Deep Seeded Region Growing)

Loss calculation is too slow #18

Closed yassouali closed 4 years ago

yassouali commented 4 years ago

Hi,

First of all, thank you very much for providing the TF implementation. I am trying to reproduce the results in PyTorch, but training is very slow because of the consistency loss computation, which uses pydensecrf and takes a lot of time for moderately sized images (~356). Do you have any recommendations on how to speed things up? Could you also share how long training took in your case?
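
Roughly, what I am computing looks like the sketch below (not exact code; `consistency_loss`, the tensor layout, and the CRF parameters are just for illustration, assuming pydensecrf and PyTorch):

```python
import numpy as np
import torch
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def consistency_loss(probs, images, n_iters=10):
    """Sketch of a CRF-based consistency (constrain-to-boundary) loss.

    probs:  (B, C, H, W) float tensor, softmax output of the network.
    images: (B, H, W, 3) uint8 numpy array, RGB inputs at the same resolution.

    The per-image CPU loop below, run on full-resolution (~356) images,
    is what dominates my training time.
    """
    b, c, h, w = probs.shape
    targets = []
    for i in range(b):
        p = np.clip(probs[i].detach().cpu().numpy(), 1e-5, 1.0)
        d = dcrf.DenseCRF2D(w, h, c)
        d.setUnaryEnergy(unary_from_softmax(p))
        d.addPairwiseGaussian(sxy=3, compat=3)
        d.addPairwiseBilateral(sxy=80, srgb=13,
                               rgbim=np.ascontiguousarray(images[i]), compat=10)
        q = np.array(d.inference(n_iters)).reshape(c, h, w)
        targets.append(torch.from_numpy(q))
    target = torch.stack(targets).to(probs.device)
    # KL(CRF output || prediction), with the CRF output treated as a fixed soft target.
    return (target * (torch.log(target + 1e-5) - torch.log(probs + 1e-5))).sum(1).mean()
```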

Thank you very much.

xtudbxk commented 4 years ago

Sorry for the late reply.

In practice, during training we always run the DenseCRF directly on the output featmap, whose size is much smaller than the input image (41x41 vs. 321x321 in our case), to save time. During testing, we first upscale the output featmap to the input image size and then use the DenseCRF to smooth the prediction.
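
For illustration only, the strategy looks roughly like this with pydensecrf (a sketch, not code from this repo; `crf_smooth`, the OpenCV resizing, and the CRF parameters are placeholders):

```python
import cv2
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_smooth(probs, image, n_iters=10):
    """Run DenseCRF at whatever resolution `probs` comes in.

    probs: (C, h, w) softmax scores; image: (H, W, 3) uint8 RGB input.
    The image is resized to match the probability map, so during training the
    CRF only ever sees the 41x41 featmap instead of the 321x321 input.
    """
    c, h, w = probs.shape
    rgb = cv2.resize(image, (w, h), interpolation=cv2.INTER_LINEAR)
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(np.clip(probs, 1e-5, 1.0)))
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=10, srgb=13, rgbim=np.ascontiguousarray(rgb), compat=10)
    return np.array(d.inference(n_iters)).reshape(c, h, w)

# Training: run directly on the 41x41 featmap (cheap).
# smoothed = crf_smooth(featmap_probs, image)          # featmap_probs: (C, 41, 41)

# Testing: upscale the featmap to the input size first, then smooth once.
# C, H, W = featmap_probs.shape[0], image.shape[0], image.shape[1]
# full = np.stack([cv2.resize(featmap_probs[k], (W, H)) for k in range(C)])
# full /= full.sum(axis=0, keepdims=True)
# smoothed = crf_smooth(full, image)
```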

I have also heard of a GitHub project that implements DenseCRF on the GPU. If the strategy above is not compatible with your case, you can use that project to speed up your experiments.

What's more, in my project the total training time is about 15 hours on an Nvidia 1080 Ti for 20 epochs.

yassouali commented 4 years ago

Thank you, I think the project you are referring to is https://github.com/heiwang1997/DenseCRF. I tried writing a Python wrapper, but I ran into some issues integrating CUDA with setuptools.
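
For anyone else trying this, what I was attempting is roughly the standard PyTorch recipe for building a CUDA extension with setuptools (a sketch only; the module and source file names are placeholders):

```python
# setup.py -- compile a CUDA extension via torch.utils.cpp_extension
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="densecrf_gpu",
    ext_modules=[
        CUDAExtension(
            name="densecrf_gpu",
            # placeholder file names: a C++ binding plus the CUDA kernels
            sources=["densecrf_binding.cpp", "densecrf_kernel.cu"],
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)
```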