switchablenorms / CelebAMask-HQ

A large-scale face dataset for face parsing, recognition, generation and editing.

Question about BN in the decoder #23

Closed: XiaoqiangZhou closed this issue 4 years ago

XiaoqiangZhou commented 4 years ago

Hi, it seems you didn't use ReLU and BN on the decoder side. Did you implement it this way on purpose?

In the definition of unetUp, self.conv = unetConv2(in_size, out_size, False), where False means is_batchnorm=False.
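For reference, here is a minimal sketch of how such a flag usually gates BN and ReLU in a UNet conv block. This is a hypothetical illustration, not the repo's actual unetConv2, whose internals may differ:

```python
import torch.nn as nn

# Hypothetical sketch: how an is_batchnorm flag typically gates BN/ReLU
# in a UNet conv block. The repo's actual unetConv2 may differ.
class UnetConv2Sketch(nn.Module):
    def __init__(self, in_size, out_size, is_batchnorm):
        super().__init__()
        if is_batchnorm:
            # Conv -> BN -> ReLU when the flag is True
            self.conv = nn.Sequential(
                nn.Conv2d(in_size, out_size, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_size),
                nn.ReLU(inplace=True),
            )
        else:
            # Bare convolution when the flag is False, which is what
            # unetUp passes in the decoder
            self.conv = nn.Sequential(
                nn.Conv2d(in_size, out_size, kernel_size=3, padding=1),
            )

    def forward(self, x):
        return self.conv(x)
```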

Thanks.

XiaoqiangZhou commented 4 years ago

Besides, there is no softmax or tanh operation after the final convolution. Is that correct?

steven413d commented 4 years ago

Hi, thank you for helping us find the problem.

  1. This version is missing BN in the decoder part. I will train and upload a new one.
  2. torch.nn.functional.cross_entropy already includes the softmax operation, so no explicit softmax is needed after the final convolution (see the sketch after this list).
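
A quick check of point 2; the 19-class tensor shape here is just an illustrative assumption:

```python
import torch
import torch.nn.functional as F

# cross_entropy applies log_softmax internally, so the network can emit
# raw logits from the final convolution with no explicit softmax.
logits = torch.randn(2, 19, 8, 8)            # (N, C, H, W) raw scores
target = torch.randint(0, 19, (2, 8, 8))     # per-pixel class indices

loss_a = F.cross_entropy(logits, target)
loss_b = F.nll_loss(F.log_softmax(logits, dim=1), target)
assert torch.allclose(loss_a, loss_b)        # same value by definition
```
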
XiaoqiangZhou commented 4 years ago

@steven413d got it! Thanks~

XiaoqiangZhou commented 4 years ago

@steven413d Could you please explain what role self.G.train() plays in trainer.py, line 82?
I can't find the definition of a train function in the unet class.
Is this function inherited from nn.Module? What will happen if we remove self.G.train()? Thanks for your patience~

steven413d commented 4 years ago

See https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch. Since training mode is the default for a newly constructed module, I think removing model.train() is OK.
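
A small demonstration that train() and eval() are inherited from nn.Module and only flip the training flag; the toy layers below are chosen just for illustration:

```python
import torch.nn as nn

# train() and eval() are defined on nn.Module, so any subclass (such as
# the repo's unet) inherits them. They set self.training, which changes
# the behavior of layers like BatchNorm and Dropout.
model = nn.Sequential(nn.BatchNorm1d(4), nn.Dropout(0.5))

print(model.training)  # True: modules start in training mode
model.eval()           # use BN running stats, disable dropout
print(model.training)  # False
model.train()          # restore training mode (the default)
print(model.training)  # True
```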

XiaoqiangZhou commented 4 years ago

@steven413d Thanks for your patience~