gnosisyuw / CrevNet-Traffic4cast


Is there any training tricks? #2

Closed toddwyl closed 4 years ago

toddwyl commented 4 years ago

[attached image]

The training process is too slow; it takes about one hour per iteration. When I set batch_size to 64 on Moving MNIST, I get bad results. Are there any training tricks?

My parameters are set as below:

parser.add_argument('--lr', default=0.0005, type=float, help='learning rate')
parser.add_argument('--beta1', default=0.9, type=float,
                    help='momentum term for adam')
# parser.add_argument('--batch_size', default=16, type=int, help='batch size')
parser.add_argument('--batch_size', default=64, type=int, help='batch size')
parser.add_argument('--log_dir', default='logs',
                    help='base directory to save logs')
parser.add_argument('--model_dir', default='',
                    help='base directory to save logs')
parser.add_argument('--name', default='', help='identifier for directory')
parser.add_argument('--data_root', default='data',
                    help='root directory for data')
parser.add_argument('--optimizer', default='adam',
                    help='optimizer to train with')
parser.add_argument('--niter', type=int, default=60,
                    help='number of epochs to train for')
parser.add_argument('--seed', default=1, type=int, help='manual seed')
# parser.add_argument('--epoch_size', type=int, default=5000, help='epoch size')
parser.add_argument('--epoch_size', type=int, default=1200, help='epoch size')
parser.add_argument('--image_width', type=int, default=64,
                    help='the height / width of the input image to network')
parser.add_argument('--channels', default=1, type=int)
parser.add_argument('--dataset', default='smmnist',
                    help='dataset to train with')
parser.add_argument('--n_past', type=int, default=8,
                    help='number of frames to condition on')
parser.add_argument('--n_future', type=int, default=10,
                    help='number of frames to predict')
parser.add_argument('--n_eval', type=int, default=18,
                    help='number of frames to predict at eval time')
parser.add_argument('--rnn_size', type=int, default=32,
                    help='dimensionality of hidden layer')
parser.add_argument('--predictor_rnn_layers', type=int,
                    default=8, help='number of layers')
parser.add_argument('--beta', type=float, default=0.0001,
                    help='weighting on KL to prior')
parser.add_argument('--model', default='crevnet',
                    help='model type (dcgan | vgg)')
parser.add_argument('--data_threads', type=int, default=5,
                    help='number of data loading threads')
parser.add_argument('--num_digits', type=int, default=2,
                    help='number of digits for moving mnist')
nBlocks=[4, 5, 3]
nStrides=[1, 2, 2]

And I combined the optimizers for frame_predictor and encoder, changing this:

frame_predictor_optimizer = opt.optimizer(
    frame_predictor.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
encoder_optimizer = opt.optimizer(
    encoder.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
scheduler1 = torch.optim.lr_scheduler.StepLR(
    frame_predictor_optimizer, step_size=50, gamma=0.2)
scheduler2 = torch.optim.lr_scheduler.StepLR(
    encoder_optimizer, step_size=50, gamma=0.2)

to:

from itertools import chain  # needed to join the two parameter iterators

combine_optimizer = opt.optimizer(
    chain(frame_predictor.parameters(), encoder.parameters()), lr=opt.lr, betas=(opt.beta1, 0.999))
combine_scheduler = torch.optim.lr_scheduler.StepLR(
    combine_optimizer, step_size=50, gamma=0.2)

Is there anything wrong with this?

gnosisyuw commented 4 years ago

I think if you use the default settings, each iteration will take 20-30 minutes on a 2080 Ti or Titan V, and the whole training process will take about one day. For MovingMNIST, please use the code provided on the ICLR page.

But if you want to use a larger batch size, you should change the learning rate correspondingly. I am not sure why your generations look so messy; our method is quite robust to changes in hyperparameters. I guess there is something wrong with the architecture. Did you build an additional decoder or something like that?
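
A minimal sketch of that adjustment (linear scaling is one common heuristic, not something prescribed by the repo; the 0.0005, 16, and 64 values come from the argparse settings posted above):

import torch

# Hypothetical linear-scaling heuristic: when the batch size grows from 16 to 64,
# scale the base learning rate by the same factor.
base_lr, base_bs = 0.0005, 16           # --lr default and the batch size it was presumably tuned for
new_bs = 64                             # the larger --batch_size being used
scaled_lr = base_lr * new_bs / base_bs  # 0.002

model = torch.nn.Linear(8, 8)           # stand-in for the real frame_predictor/encoder
optimizer = torch.optim.Adam(model.parameters(), lr=scaled_lr, betas=(0.9, 0.999))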

As for combining the optimizers: usually I would use something like the code below. I don't know if your version works.

optimizer = optim.SGD(
        [{'params': model.parameters()}, {'params': metric.parameters()}, {'params': Header.parameters()}], lr=0.001,
        momentum=0.9, weight_decay=1e-5)
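
Adapted to the frame_predictor/encoder pair from this thread, that pattern would look roughly like the following sketch (the Linear modules are stand-ins; the Adam and StepLR settings are copied from the snippets above, not from the repo itself):

from torch import nn, optim

# Sketch only: tiny stand-ins for the real frame_predictor and encoder modules.
frame_predictor = nn.Linear(8, 8)
encoder = nn.Linear(8, 8)

# One optimizer with two parameter groups instead of two separate optimizers.
combined_optimizer = optim.Adam(
    [{'params': frame_predictor.parameters()},
     {'params': encoder.parameters()}],
    lr=0.0005, betas=(0.9, 0.999))

# A single StepLR then decays both groups together.
combined_scheduler = optim.lr_scheduler.StepLR(
    combined_optimizer, step_size=50, gamma=0.2)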
toddwyl commented 4 years ago

Thanks, I solved the issue with the separate optimizers and your scheduler. I think the problem was that I had made the epoch size smaller, so the scheduler reduced the learning rate after far fewer updates.
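
For what it's worth, the interaction can be seen with a quick back-of-the-envelope sketch (assuming scheduler.step() is called once per epoch, as in the snippets above):

# With StepLR(step_size=50), the first learning-rate drop happens after 50 epochs
# regardless of epoch size, so a smaller epoch_size means far fewer gradient
# updates are taken at the initial learning rate before the decay kicks in.
for epoch_size in (5000, 1200):
    updates_before_first_decay = 50 * epoch_size
    print(epoch_size, updates_before_first_decay)   # 250000 vs. 60000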

Yeoninm commented 3 years ago

Hi, may I ask what performance you get on MovingMNIST? I used the code provided on the ICLR page, but I get bad performance.
Thanks for your answer!