shanice-l / gdrnpp_bop2022

PyTorch Implementation of GDRNPP, winner (most of the awards) of the BOP Challenge 2022 at ECCV'22
Apache License 2.0
229 stars 49 forks

training time relevant factors (decreasing training time) #115

Closed freLorbeer closed 1 month ago

freLorbeer commented 6 months ago

Hi, thanks a lot for your work!

I am trying to train on my own custom data in BOP format. I used 10k images with around 14k instances of my object of interest in total (I train on only a single object).

The estimated training time ranges between 8 and 15 days, depending on my configuration of batch_size, num_workers, and persistent_workers. But training on a single object of the tless dataset takes around 18-24 hours on the same hardware, with around 22k images at the same resolution. What relevant factors am I missing to improve the training time? I used the same configuration parameters for my dataset as for the tless dataset.

Can I precompute anything?
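A slowdown like this is often a data-loading bottleneck rather than the model itself. As a generic diagnostic (not GDRNPP-specific; `profile_loader` and the toy loader below are hypothetical names), one can split wall time per iteration into "waiting on the next batch" vs. "running the training step":

```python
import time

def profile_loader(loader, step_fn, n_batches=20):
    """Roughly split wall time into data-loading wait vs. step time.

    If load_frac is high (e.g. > 0.5), the loader (num_workers, disk I/O,
    augmentation) is the bottleneck rather than the forward/backward pass.
    """
    load_t, step_t = 0.0, 0.0
    it = iter(loader)
    t0 = time.perf_counter()
    for _ in range(n_batches):
        batch = next(it)        # time spent waiting on data
        t1 = time.perf_counter()
        step_fn(batch)          # stand-in for the training step
        t2 = time.perf_counter()
        load_t += t1 - t0
        step_t += t2 - t1
        t0 = t2
    total = load_t + step_t
    return load_t / total, step_t / total

# Toy demo: a "loader" that takes ~10 ms per batch vs. a ~1 ms step,
# so the loader should dominate the measured time.
def slow_loader():
    while True:
        time.sleep(0.010)
        yield [0] * 8

load_frac, step_frac = profile_loader(slow_loader(), lambda b: time.sleep(0.001))
```

With a real PyTorch DataLoader in place of the toy generator, a high `load_frac` would point toward tuning num_workers/persistent_workers or precomputing augmentations, while a high `step_frac` points at the model/GPU side.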

shanice-l commented 1 month ago

Sorry for the late reply. I think there must be some technical problem; GDRNPP training should be much faster.