facebookresearch / hiera

Hiera: A fast, powerful, and simple hierarchical vision transformer.
Apache License 2.0

Doubt regarding Batch Size #22

Open owaisCS opened 1 year ago

owaisCS commented 1 year ago

In Tables 14(a) and 14(b), you mention a batch size of 4096 for pretraining and 1024 for finetuning. Could you clarify whether these batch sizes are per GPU or global across all GPUs on all nodes? Could you also specify the GPU memory size used to obtain the reported values?

Could the logs of pretraining and finetuning be made available?

dbolya commented 1 year ago

The batch sizes in the appendix are global unless stated otherwise, and so are the learning rates. So you can use any number of GPUs as long as the total batch size adds up to that number. We used a mix of A100 40GB and A100 80GB GPUs in our experiments, so most configs should work on GPUs with 40GB of memory or less when using 64 GPUs.
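For concreteness, here is a minimal sketch of how the global numbers in Table 14 map to a per-GPU batch size in a distributed run. All names and the 64-GPU setup below are illustrative, not taken from the Hiera codebase:

```python
# Illustrative only: per-GPU batch size implied by the global batch sizes
# in Table 14, assuming a 64-GPU run (e.g. 8 nodes x 8 A100s).
GLOBAL_BATCH_PRETRAIN = 4096  # Table 14(a), global across all GPUs/nodes
GLOBAL_BATCH_FINETUNE = 1024  # Table 14(b), global across all GPUs/nodes

num_gpus = 64

per_gpu_pretrain = GLOBAL_BATCH_PRETRAIN // num_gpus  # 64 images per GPU
per_gpu_finetune = GLOBAL_BATCH_FINETUNE // num_gpus  # 16 images per GPU

# The per-GPU sizes must multiply back to the global batch size.
assert per_gpu_pretrain * num_gpus == GLOBAL_BATCH_PRETRAIN
assert per_gpu_finetune * num_gpus == GLOBAL_BATCH_FINETUNE
print(per_gpu_pretrain, per_gpu_finetune)
```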

If you run out of memory, you can always use a smaller per-GPU batch size and compensate either by increasing the number of GPUs (which is equivalent) or by reducing the learning rate (which might not exactly reproduce the results, since we train with AdamW). A sketch of the latter is shown below.
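As an illustration of the learning-rate compensation, here is a sketch of the common linear scaling rule. Treat this as an assumption rather than the official Hiera recipe; as noted above, with AdamW the scaling is only approximate, so results may not reproduce exactly:

```python
# Sketch of linear learning-rate scaling when the global batch size must be
# reduced. This is an illustration/assumption, not the official Hiera recipe;
# with AdamW the scaling is only approximate.
def scale_lr(base_lr: float, base_global_batch: int, actual_global_batch: int) -> float:
    """Scale the learning rate proportionally to the effective global batch size."""
    return base_lr * actual_global_batch / base_global_batch

# Hypothetical example: the config's LR assumes a global batch of 4096, but
# only an effective batch of 2048 fits (e.g. 32 GPUs x 64 images per GPU).
paper_lr = 8e-4  # hypothetical base LR, for illustration only
lr = scale_lr(paper_lr, base_global_batch=4096, actual_global_batch=2048)  # -> 4e-4
```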

I'll look into seeing if we can release some training graphs.

owaisCS commented 1 year ago

Thank you for the response.

Kindly upload logs of pretraining and fine-tuning.