alinlab / L2T-ww

Learning What and Where to Transfer (ICML 2019)
MIT License

Could you specify details of TinyImageNet-ResNet32 to CIFAR100-VGG9 #6

Closed jnhwkim closed 4 years ago

jnhwkim commented 4 years ago

Hi, I would like to confirm some implementation details for the TinyImageNet-ResNet32 to CIFAR100-VGG9 section, including:

  1. The architecture of ResNet-32: how is the stem conv modified? (The rest can be found in Figure 3.)
  2. The architecture of VGG-9: how is the classifier modified, and is BN used?
  3. Are any other hyperparameters changed? In particular, is the learning rate still fixed at 0.1 even though the batch size is doubled, the input images are smaller, and the network capacity is lower?
  4. How are the validation splits set aside for CIFAR-100 and STL-10?
  5. Please specify the augmentation strategy for the 32x32 regime.

hankook commented 4 years ago

  1. We used the standard residual networks from the original ResNet paper.
  2. We used the same architecture as the Jacobian Matching paper (with BN).
  3. We used the same hyperparameters as described in the appendix of our paper. You should modify the --pairs argument because VGG9 has a different number of layers.
  4. In both cases, we sampled 10% of the training set as the validation split.
  5. We used the conventional augmentation scheme (random crop with padding, and horizontal flip).
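The 10% validation split described in answer 4 can be sketched as follows. This is not the repository's actual code, just a minimal illustration of sampling a held-out fraction of training indices (the function name and seed are made up for the example):

```python
import random

def split_train_val(n_train, val_frac=0.1, seed=0):
    """Shuffle the training indices and set aside `val_frac` of them
    as a validation split; return (train_indices, val_indices)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    indices = list(range(n_train))
    rng.shuffle(indices)
    n_val = int(n_train * val_frac)
    # The first n_val shuffled indices become the validation split.
    return indices[n_val:], indices[:n_val]
```

For CIFAR-100 (50,000 training images) this yields 45,000 training and 5,000 validation indices.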
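The conventional 32x32 augmentation from answer 5 (pad, random crop back to the original size, random horizontal flip) can be sketched with numpy. This is an assumption-laden illustration, not the repo's pipeline; the 4-pixel padding is the common CIFAR default, not something the maintainers specified:

```python
import numpy as np

def augment(img, pad=4, rng=None):
    """Conventional CIFAR-style augmentation for an H x W x C image:
    zero-pad spatially, crop back to H x W at a random offset, then
    mirror left-right with probability 0.5."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = img.shape[:2]
    # Zero-pad only the spatial dimensions, leaving channels untouched.
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    top = int(rng.integers(0, 2 * pad + 1))
    left = int(rng.integers(0, 2 * pad + 1))
    out = padded[top:top + h, left:left + w]
    # Random horizontal flip.
    if rng.random() < 0.5:
        out = out[:, ::-1]
    return out
```

In a PyTorch training script the same effect is usually obtained with `transforms.RandomCrop(32, padding=4)` followed by `transforms.RandomHorizontalFlip()`.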

jnhwkim commented 4 years ago

Thanks for the details. I'll close the issue.