Hi, you can find it in the context encoder paper's Supplementary Material: they use a 4000-unit bottleneck layer instead of the channel-wise FC.
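For anyone landing here later, a minimal sketch of the difference, assuming a PyTorch reimplementation; the module name `ChannelWiseFC`, the 512×4×4 encoder output size, and the initialisation are illustrative assumptions, not code from this repo or from pathak22/context-encoder:

```python
import torch
import torch.nn as nn

class ChannelWiseFC(nn.Module):
    """Channel-wise fully connected layer: one independent H*W -> H*W
    linear map per channel, so information is mixed spatially but not
    across channels (as described in the context encoder paper)."""
    def __init__(self, channels, spatial_size):
        super().__init__()
        # One weight matrix per channel, shape (C, H*W, H*W); scaled init is an assumption.
        self.weight = nn.Parameter(
            torch.randn(channels, spatial_size, spatial_size) / spatial_size ** 0.5)
        self.bias = nn.Parameter(torch.zeros(channels, spatial_size))

    def forward(self, x):
        n, c, h, w = x.shape
        x = x.view(n, c, h * w)  # (N, C, H*W)
        # Batched matmul: each channel gets its own linear map over spatial locations.
        out = torch.einsum('ncs,cst->nct', x, self.weight) + self.bias
        return out.view(n, c, h, w)

# The inpainting variant instead flattens the encoder output into a plain
# 4000-unit bottleneck (the 512*4*4 input size is only an example):
bottleneck = nn.Sequential(
    nn.Flatten(),                  # (N, C*H*W)
    nn.Linear(512 * 4 * 4, 4000),  # 4000-unit bottleneck per the supplementary material
    nn.ReLU(inplace=True),
)
```

The channel-wise version keeps spatial mixing cheap (no cross-channel parameters), while the plain bottleneck mixes everything but at a much larger parameter cost for the same feature-map size.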
Thanks a lot! I hadn't noticed it. I'm closing this issue. This may not be the right place to ask, but please allow me: did you understand why it was removed in the inpainting implementation (and kept in the feature-learning one)? Was it a purely empirical decision?
This may help you: https://github.com/pathak22/context-encoder/issues/10 :)
He says it is simply not required... well, OK. Thank you for the fast answer, @qq519043202!
Hi, thanks a lot for your implementation! I can see that you have a channel-wise fully connected layer function in your Model.py, but you're not using it when building your generator. May I know why?