Open VjayalakshmiK opened 4 years ago
Yes, you're right, it's not the same. While it is inconsistent, I'm not sure it's actually 'wrong' per se; it's more a different way of doing it. You could try the precise BigGAN approach, but I'm not sure it would give particularly different results.
Hi, I understand from the paper and supplemental that the ResNet block used in this work has been borrowed from the BigGAN paper. This is also clearly illustrated in Figure 8(a)-(c) of the supplemental (image attached below).
However, the ResNet block definition in blocks.py of model.layers (screenshot of code segment attached) does not seem to match the sequence of operations illustrated in the diagrams.
In blocks.py, the order for channel a is normalization1 -> activation1 -> conv1 -> normalization2 -> activation2 -> conv2 -> down/up/identity, whereas in BigGAN, as well as in Figure 8 of your supplemental, down/up/identity comes after activation1 and before conv1 (not at the end, as coded). The same applies to channel b.
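To make the discrepancy concrete, here is a minimal sketch of the two orderings as operation sequences. The function and operation names are illustrative only (not taken from blocks.py); the point is simply where the resampling step sits relative to conv1.

```python
def block_order_as_coded():
    # Ordering in blocks.py: resampling (down/up/identity) is applied last.
    return ["norm1", "act1", "conv1", "norm2", "act2", "conv2", "resample"]


def block_order_as_in_biggan():
    # Ordering in BigGAN / Figure 8: resampling sits between act1 and conv1.
    return ["norm1", "act1", "resample", "conv1", "norm2", "act2", "conv2"]


if __name__ == "__main__":
    coded = block_order_as_coded()
    paper = block_order_as_in_biggan()
    # In the coded version resampling comes after conv2; in the paper
    # version it comes before conv1, so conv1 sees a resampled feature map.
    print(coded.index("resample") > coded.index("conv2"))   # True
    print(paper.index("resample") < paper.index("conv1"))   # True
```

The placement matters for spatial resolution: in the BigGAN ordering, conv1 operates on the already-resampled feature map, while in the coded ordering both convolutions run at the input resolution.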
Would it be possible to clarify which of the two is correct? If the code is wrong, it would be good to rectify it soon, given that many have shown interest in your work :)