loeweX / Greedy_InfoMax

Code for the paper: Putting An End to End-to-End: Gradient-Isolated Learning of Representations
https://arxiv.org/abs/1905.11786
MIT License

Resnet Encoder Layer Numbers #12

Closed · rschwarz15 closed 4 years ago

rschwarz15 commented 4 years ago

My understanding of the layer numbers for the pre-activation ResNet encoders used here doesn't align with how you've labelled them.

https://github.com/loeweX/Greedy_InfoMax/blob/8f91dc27fcc6edf1f5b9f005a9f5566bb796dce2/GreedyInfoMax/vision/models/FullModel.py#L27-L36

The entries of block_dims sum to 37 blocks in total, and there is also the initial conv1. With PreActBlockNoBN (2 conv layers per block), doesn't this result in a ResNet-75? With PreActBottleneckNoBN (3 conv layers per block), doesn't this result in a ResNet-112?
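
For concreteness, here is the arithmetic I'm using (a minimal sketch; num_blocks is just a hypothetical name for the sum of block_dims, and I'm counting conv layers the way the usual ResNet naming does):

    # Sketch of the depth count, assuming the linked config,
    # where the entries of block_dims sum to 37 residual blocks.
    num_blocks = 37

    # Initial conv1 plus the conv layers inside each residual block:
    print(num_blocks * 2 + 1)  # 75  with PreActBlockNoBN (2 layers per block)
    print(num_blocks * 3 + 1)  # 112 with PreActBottleneckNoBN (3 layers per block)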

Please let me know if I've misunderstood something.

loeweX commented 4 years ago

Good catch! Indeed, it should be:

block_dims = [3, 4, 6] 
num_channels = [64, 128, 256] 

(since we only use the first four blocks of the ResNet architecture, i.e., conv1 plus the first three residual stages)
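
As a quick sanity check under the same counting convention as above (a sketch, not code from the repo):

    # The corrected block_dims matches the stage sizes of the first
    # three residual stages of a standard ResNet-50 ([3, 4, 6, 3]
    # with the last stage dropped).
    block_dims = [3, 4, 6]
    total_blocks = sum(block_dims)  # 13 residual blocks

    print(total_blocks * 2 + 1)  # 27 conv layers with PreActBlockNoBN
    print(total_blocks * 3 + 1)  # 40 conv layers with PreActBottleneckNoBN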

Could you submit a pull request?