exnx opened this issue 2 years ago (status: Open)
https://github.com/google/automl/blob/master/efficientnetv2/main_tf2.py#L256
```python
for stage in range(total_stages):
  # Fraction of total training completed by the end of this stage.
  ratio = float(stage + 1) / float(total_stages)
  start_epoch = int(
      float(stage) / float(total_stages) * config.train.epochs)
  end_epoch = int(ratio * config.train.epochs)
  # Image size grows linearly from ibase up to train_size across stages.
  image_size = int(ibase + (train_size - ibase) * ratio)

  if config.train.sched:
    # Regularization strength (RandAugment magnitude, mixup/cutmix alpha)
    # is ramped up stage by stage.
    config.data.ram = ram_list[stage]
    config.data.mixup_alpha = mixup_list[stage]
    config.data.cutmix_alpha = cutmix_list[stage]

  model.fit(
      get_dataset(training=True, image_size=image_size, config=config),
      initial_epoch=start_epoch,
      epochs=end_epoch,
      steps_per_epoch=steps_per_epoch,
      callbacks=filter_callbacks([ckpt_callback, tb_callback, rstr_callback]),
      verbose=2 if strategy == 'tpu' else 1,
  )
```
Hi, I see a number of people have asked where exactly the progressive learning code is located in this repo, and I also couldn't find it. Was anyone able to identify the relevant parts? I'd really appreciate any pointers. Thank you.
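For reference, my reading of the loop quoted above is that it splits training into `total_stages` phases and, from one stage to the next, linearly grows both the image size (from `ibase` to `train_size`) and the regularization strength (RandAugment magnitude, mixup/cutmix alpha). Below is a minimal standalone sketch, not taken from the repo, that just prints the per-stage schedule such a loop would produce; the stage count, image sizes, epoch count, and regularization lists are hypothetical placeholder values:

```python
# Hypothetical settings for illustration only.
total_stages = 4
total_epochs = 100
ibase, train_size = 128, 300          # starting and final training image size
ram_list = [5, 8, 12, 15]             # RandAugment magnitude per stage
mixup_list = [0.0, 0.0, 0.2, 0.4]     # mixup alpha per stage
cutmix_list = [0.0, 0.0, 0.2, 0.4]    # cutmix alpha per stage

for stage in range(total_stages):
    ratio = float(stage + 1) / float(total_stages)
    start_epoch = int(float(stage) / float(total_stages) * total_epochs)
    end_epoch = int(ratio * total_epochs)
    image_size = int(ibase + (train_size - ibase) * ratio)
    print(f'stage {stage}: epochs {start_epoch}-{end_epoch}, '
          f'image_size={image_size}, ram={ram_list[stage]}, '
          f'mixup={mixup_list[stage]}, cutmix={cutmix_list[stage]}')
```

If that reading is right, the progressive part is entirely this staged `model.fit` loop plus the `get_dataset(..., image_size=...)` call, but I'd appreciate confirmation from someone who knows the code better.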