xthan / VITON

Code and dataset for paper "VITON: An Image-based Virtual Try-on Network"

Regarding Activation Function #39

Closed akshay951228 closed 5 years ago

akshay951228 commented 5 years ago

LeakyReLU generally works great compared to ReLU, but in the VITON stage 1 network the encoder uses LeakyReLU as its activation function, while the decoder uses ReLU for every layer except the last. Can I know the reason why the decoder network uses ReLU instead of LeakyReLU? I have googled a lot but did not find any good explanation.

xthan commented 5 years ago

This is just a design choice. I do not think leaky ReLU and ReLU have a significant performance difference. I followed the code of pix2pix for a fair comparison. Using all ReLUs should also be fine --- BigGAN (https://arxiv.org/pdf/1809.11096.pdf) uses all ReLUs, while StyleGAN (http://stylegan.xyz/paper) uses some leaky ReLUs.
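For reference, below is a minimal sketch of the pix2pix-style convention being discussed: LeakyReLU(0.2) in the encoder's downsampling blocks, plain ReLU in the decoder, and tanh on the final output layer. This is written in PyTorch purely for illustration (the VITON repo itself is TensorFlow), and the block names and channel sizes are hypothetical, not taken from the repo's code.

```python
import torch
import torch.nn as nn

def encoder_block(in_ch, out_ch):
    # Downsampling block: LeakyReLU(0.2) precedes the strided convolution,
    # following the pix2pix generator layout.
    return nn.Sequential(
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
    )

def decoder_block(in_ch, out_ch, last=False):
    # Upsampling block: plain ReLU, except the last layer, which ends in
    # tanh to map the output into [-1, 1].
    layers = [
        nn.ReLU(inplace=True),
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.Tanh() if last else nn.BatchNorm2d(out_ch),
    ]
    return nn.Sequential(*layers)

# Quick shape check with hypothetical channel counts:
enc = encoder_block(64, 128)
dec = decoder_block(128, 64, last=False)
x = torch.randn(1, 64, 64, 64)
print(dec(enc(x)).shape)  # torch.Size([1, 64, 64, 64])
```

Either activation placement trains fine in practice; as noted above, the asymmetric choice mainly mirrors pix2pix so comparisons stay fair.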