Hi! The research in this paper is excellent work.
By the way, I would like to ask a question about "htc++ beitv2+obj365".
The paper states: "For the models with Objects365 pre-training, we first pre-train for 26 epochs, then fine-tune it for 20k iterations using 32 A100 GPUs with a total batch size of 64 (i.e., 2 image/GPU)."
Which configuration file corresponds to the initial 26-epoch Objects365 pre-training?