machanic opened this issue 4 years ago
Hi, sorry, that is more leftover debug code. We don't actually need that many epochs: the checkpoint is autosaved during training, so we never changed the maximum-epoch setting. Tiny-ImageNet training usually takes about 2-4 hours; once the training loss stops dropping, the model is fine.
In https://github.com/dydjw9/MetaAttack_ICLR2020/blob/master/meta_training/imagenet_meta_training/imagenet_train.py#L152 the script sets `--epoch 254600`, and with the loop `for epoch in range(args.epoch // 100):` that means 2546 epochs of training. Why is it so large? Doesn't it cause over-fitting?
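For reference, a minimal sketch of the arithmetic behind the 2546 figure (the `--epoch` flag name comes from the linked script; the argparse scaffolding here is illustrative, not the repo's actual code):

```python
import argparse

# Illustrative parser mirroring the flag in question.
parser = argparse.ArgumentParser()
parser.add_argument("--epoch", type=int, default=254600)
args = parser.parse_args([])  # use the default, as the script does

# The training loop runs args.epoch // 100 iterations:
num_epochs = args.epoch // 100
print(num_epochs)  # 2546
```

So the effective epoch count is the flag value integer-divided by 100, which is why `--epoch 254600` yields 2546 training epochs.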