Closed bhack closed 5 years ago
I.e. Check https://arxiv.org/abs/1806.02658
Yes, it could be. I think it's somewhat controversial, though, because that post focuses more on RGB images as outputs, where those artifacts matter more than in semantic segmentation.
I see different architectures with upsampling and with deconvolutions, but I don't see any community agreement. I have edited the code now to use the upsampling layer (I did not delete the transpose convolution code). In any case, DeepLabv3 relies on upsampling instead of transposed convolutions, so since MNasNet is also from Google, I'll go with the upsampling haha
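For reference, the upsampling-then-convolve decoder path can be sketched in plain NumPy. This is a hypothetical illustration (the function name `nn_upsample2d` and the shapes are mine, not from the repo); in Keras it would correspond to an `UpSampling2D` layer followed by a regular `Conv2D`:

```python
import numpy as np

def nn_upsample2d(x, factor=2):
    """Nearest-neighbour upsampling: each pixel is repeated factor x factor
    times. Followed by a regular convolution, this avoids the uneven kernel
    overlap that a transposed convolution can introduce."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

x = np.arange(4.0).reshape(2, 2)
print(nn_upsample2d(x))
# [[0. 0. 1. 1.]
#  [0. 0. 1. 1.]
#  [2. 2. 3. 3.]
#  [2. 2. 3. 3.]]
```

The learned weights then live in the ordinary convolution that follows, rather than in the upsampling step itself.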
What do you think?
Yes, DeepLabv3, DeepLabv3+ and the follow-up https://arxiv.org/abs/1809.04184 use upsampling.
In the MNasNet paper, segmentation was only in the "future work" section, so there is no reference implementation. Please check the last work I mentioned to see if you find something interesting. I've also mentioned your work in https://github.com/jhfjhfj1/autokeras/issues/81
Sure! Thanks a lot! :D
If you are interested, the code for the arXiv paper I mentioned is available at https://github.com/tensorflow/models/pull/5430
Are the deconvs (transposed convolutions) going to create artifacts in the decoder?
https://distill.pub/2016/deconv-checkerboard/ (there are many mentions of this 2016 article)