microsoft / unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
https://aka.ms/GeneralAI
MIT License

BEiT v2's self-supervised pre-training is slow #1368

Open zhanglaoban-kk opened 1 year ago

zhanglaoban-kk commented 1 year ago

I have 16,000 images in my unlabeled dataset and the batch_size is set to 32, yet it takes almost 20 minutes to train one epoch. What could be the reason for this?

pengzhiliang commented 1 year ago

@zhanglaoban-kk Thanks for your interest.

Is 32 the total batch size? If not, what is the total batch size?

zhanglaoban-kk commented 1 year ago

Here are the parameters I set:

    def get_args():
        parser = argparse.ArgumentParser('BEiT pre-training script', add_help=False)
        parser.add_argument('--batch_size', default=32, type=int)
        parser.add_argument('--epochs', default=500, type=int)
        parser.add_argument('--save_ckpt_freq', default=100, type=int)
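
For reference, in BEiT-style pre-training scripts the batch size that matters for throughput is usually the effective one: per-GPU batch size times gradient-accumulation steps times number of GPUs. A minimal sketch of that arithmetic, using illustrative names (`update_freq`, `world_size` are assumptions here, not necessarily this script's exact flags):

    # Minimal sketch of the effective batch size (names are illustrative, not the script's exact flags).
    def effective_batch_size(batch_size_per_gpu, update_freq, world_size):
        # per-GPU batch * gradient-accumulation steps * number of GPUs
        return batch_size_per_gpu * update_freq * world_size

    print(effective_batch_size(32, 1, 1))   # single GPU, as configured above -> 32
    print(effective_batch_size(32, 1, 16))  # same per-GPU batch on 16 GPUs  -> 512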

pengzhiliang commented 1 year ago

I see. How many GPU cards are you using?

zhanglaoban-kk commented 1 year ago

I only used one A40.

pengzhiliang commented 1 year ago

OK, we can provide a reference point: in our experiments, one epoch takes about 5 minutes with 16 V100s on ImageNet-1K (1.28M images).
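
As a rough back-of-the-envelope comparison (my own arithmetic, using only the numbers quoted in this thread and assuming the 5 minutes refers to one epoch):

    # Back-of-the-envelope throughput comparison (round numbers from the thread, for illustration only).
    ref_images, ref_seconds, ref_gpus = 1_280_000, 5 * 60, 16   # reference run: ImageNet-1K on 16 V100s
    run_images, run_seconds = 16_000, 20 * 60                   # issue author's run: one A40

    ref_ips = ref_images / ref_seconds   # ~4266 images/s in total, ~267 images/s per GPU
    run_ips = run_images / run_seconds   # ~13 images/s

    print(f"reference: {ref_ips:.0f} img/s total, {ref_ips / ref_gpus:.0f} img/s per GPU")
    print(f"this run : {run_ips:.0f} img/s")

If those figures hold, the single-GPU run is well below the per-GPU throughput of the reference, so the gap is larger than the difference in GPU count alone would explain; it may be worth checking the data-loading pipeline (e.g. the number of dataloader workers and image-decoding speed) as well.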