huzi96 / Coarse2Fine-PyTorch


Question regarding entropy of the inner most (z_1) layer #15

Closed danishnazir closed 1 year ago

danishnazir commented 1 year ago

Hi, first of all, thank you for open-sourcing your code. It is very intuitive and easy to understand. I have one question regarding the following lines.

self.h1_sigma = nn.Parameter(torch.ones((1,32*4,1,1), dtype=torch.float32, requires_grad=False))  # requires_grad is False, meaning this parameter is fixed, not learnable.
z1_sigma = torch.abs(self.get_h1_sigma)
z1_mu = torch.zeros_like(z1_sigma)

This means that the mu and sigma for the innermost layer (z_1) are fixed, i.e. 0 and 1. However, in the paper you mention that sigma(z) is learnable. Can you elaborate on this?

Thanks.

huzi96 commented 1 year ago

Which file is it? In the training code I saw https://github.com/huzi96/Coarse2Fine-PyTorch/blob/921fcefda8205c9790c5172777a3ec371b2a6c7e/train/train.py#L376

danishnazir commented 1 year ago

Hi, thanks for replying. I saw it in the networks.py file (https://github.com/huzi96/Coarse2Fine-PyTorch/blob/master/networks.py#L457). Thanks

huzi96 commented 1 year ago

Okay, this networks.py code is only suitable for inference. Since we always load the saved weights at test time, it doesn't matter whether the tensor is trainable. As you can see in the training code, it is set to be trainable.
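To illustrate the point: here is a minimal sketch (not the repo's actual module; the `SigmaHolder` class and `trainable` flag are hypothetical) showing that `requires_grad` only controls whether gradients flow to the parameter, while `load_state_dict` copies checkpoint values into it either way, which is why trainability is irrelevant once weights are loaded for inference.

```python
import torch
import torch.nn as nn

# Hypothetical minimal module mirroring the h1_sigma pattern discussed above.
class SigmaHolder(nn.Module):
    def __init__(self, trainable: bool):
        super().__init__()
        # Training-time version: requires_grad=True, so sigma is learned.
        # Inference-time version: requires_grad=False, value comes from the checkpoint.
        self.h1_sigma = nn.Parameter(
            torch.ones((1, 32 * 4, 1, 1), dtype=torch.float32),
            requires_grad=trainable,
        )

train_net = SigmaHolder(trainable=True)
infer_net = SigmaHolder(trainable=False)

# Only the training-time parameter will receive gradients.
print(train_net.h1_sigma.requires_grad)  # True
print(infer_net.h1_sigma.requires_grad)  # False

# load_state_dict copies values regardless of requires_grad, so the
# fixed tensor still ends up holding the learned sigma at test time.
state = {"h1_sigma": torch.full((1, 128, 1, 1), 0.5)}
infer_net.load_state_dict(state)
print(float(infer_net.h1_sigma.mean()))  # 0.5
```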

danishnazir commented 1 year ago

Yeah, understood, thanks for the explanation.