yaoyao-liu / meta-transfer-learning

TensorFlow and PyTorch implementation of "Meta-Transfer Learning for Few-Shot Learning" (CVPR2019)
https://lyy.mpi-inf.mpg.de/mtl/
MIT License

Resnet Clarification #22

Closed JKDomoguen closed 4 years ago

JKDomoguen commented 4 years ago

Hi, in the PyTorch implementation of the ResNet backbone, i.e. pytorch\models\resnet_mtl.py, I noticed that the residual block as well as the overall ResNet backbone is different from its TensorFlow counterpart. For example, each residual block contains only two 3x3 conv layers, and the number of filters starts from 160, then 320 and 640. I'm sorry, but can you please explain why this is different from the TensorFlow version?

Thank you

yaoyao-liu commented 4 years ago

Hi @JKDomoguen,

Thanks for your interest in our work.

The TensorFlow implementation is what we use for the experiments in the paper. The PyTorch implementation is based on FEAT, so we use the same network architecture (a 25-layer ResNet) as FEAT. If you would like to use the same network architecture (ResNet-12) as in the main paper, we can change the network architecture following MetaOptNet.
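For reference, a residual block of the kind described in the question (two 3x3 convs per block, with widths such as 160, 320, 640) can be sketched in PyTorch roughly as follows. This is a minimal illustrative sketch, not the exact code from resnet_mtl.py or FEAT; the class name `BasicBlockSketch` and its details are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BasicBlockSketch(nn.Module):
    """Illustrative two-conv residual block (hypothetical; not the
    repo's exact resnet_mtl.py code). Each block applies two 3x3
    convolutions, with a 1x1 projection on the shortcut when the
    spatial size or channel count changes."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # first 3x3 conv may downsample (stride > 1)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        # second 3x3 conv keeps resolution
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # projection shortcut when shapes differ
        self.down = None
        if stride != 1 or in_ch != out_ch:
            self.down = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)

# Channel widths mentioned in the question: 160 -> 320 -> 640
x = torch.randn(2, 160, 16, 16)
block = BasicBlockSketch(160, 320, stride=2)
print(block(x).shape)  # torch.Size([2, 320, 8, 8])
```

By contrast, the ResNet-12 used in the paper (following MetaOptNet) stacks blocks with three 3x3 convs each, which is where the layer-count difference between the two implementations comes from.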

JKDomoguen commented 4 years ago

Oh, that explains it. Thank you very much for your time, and really great work! I enjoyed reading both MTL and your recent NeurIPS paper, LST.