NVlabs / DIODE

Official PyTorch implementation of Data-free Knowledge Distillation for Object Detection, WACV 2021.
https://openaccess.thecvf.com/content/WACV2021/html/Chawla_Data-Free_Knowledge_Distillation_for_Object_Detection_WACV_2021_paper.html

Training time consumption #18

Open DCNSW opened 2 years ago

DCNSW commented 2 years ago

As the comments in LINE_looped_runner_yolo.sh show, the authors used 28 GPUs to generate a dataset in 48 hours.

Could you provide the detailed running time of each stage: a) generating the 160x160 images, b) upsampling the images from 160x160 to 320x320, c) fine-tuning on the 320x320 images, and d) knowledge distillation?

Thank you. @akshaychawla
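
In the meantime, a minimal sketch of how the per-stage wall-clock time could be measured locally; the stage commands below are placeholders, not the repository's actual scripts:

```python
import subprocess
import time

# Placeholder commands -- substitute the real per-stage invocations
# (e.g. the calls made inside LINE_looped_runner_yolo.sh).
stages = {
    "generate_160":  ["echo", "generate 160x160 images"],
    "upsample_320":  ["echo", "upsample 160x160 -> 320x320"],
    "finetune_320":  ["echo", "fine-tune on 320x320 images"],
    "distillation":  ["echo", "knowledge distillation"],
}

for name, cmd in stages.items():
    start = time.perf_counter()
    subprocess.run(cmd, check=True)      # run the stage, fail fast on error
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed / 3600:.2f} h")
```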

hongge831 commented 2 years ago

I have the same question about the timing.