Open prabh27 opened 3 years ago
All of the training uses batch size 64. https://github.com/WongKinYiu/ScaledYOLOv4/blob/yolov4-large/train.py#L76
We use 1080Ti/2080Ti/Titan X/Titan RTX/V100 GPUs for training. The yolov4-large models use 4 or 8 V100 GPUs for training to save training time.
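For readers wondering how a global batch of 64 fits on a handful of GPUs, here is a minimal sketch of the usual approach in yolov5-family trainers: each GPU takes a smaller mini-batch and gradients are accumulated until the nominal batch size is reached. The function name and variables below are illustrative, not the repo's exact code.

```python
# Hedged sketch: realizing a nominal (global) batch of 64 across several GPUs
# via per-GPU mini-batches plus gradient accumulation. Names are illustrative.

def accumulation_steps(nominal_batch: int, per_gpu_batch: int, num_gpus: int) -> int:
    """Number of forward/backward passes to accumulate before an optimizer
    step, so that per_gpu_batch * num_gpus * accumulate ~= nominal_batch."""
    total = per_gpu_batch * num_gpus
    return max(round(nominal_batch / total), 1)

# e.g. 8 images per GPU on 4 V100s -> accumulate gradients over 2 mini-batches
print(accumulation_steps(64, 8, 4))  # 2
# 8 images per GPU on 8 V100s already gives 64 -> step every mini-batch
print(accumulation_steps(64, 8, 8))  # 1
```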
Do you have an estimate of how long training yolov4-large took on 2080Ti or V100 GPUs?
Hi,
Thanks a lot for the wonderful paper and the GitHub repo, and congratulations on achieving the highest accuracy.
In the paper, you mentioned: "The time used for training YOLOv4-tiny is 600 epochs, and that used for training YOLOv4-CSP is 300 epochs. As for YOLOv4-large, we execute 300 epochs first and then followed by using stronger data augmentation method to train 150 epochs."
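For context, the quoted schedule can be turned into a rough iteration count. This is a back-of-envelope sketch, assuming the commonly cited size of the MS COCO train2017 split (118,287 images) and the batch size of 64 discussed in this thread; the constant is an assumption, not taken from the repo.

```python
# Back-of-envelope optimizer-step count for the YOLOv4-large schedule
# (300 epochs + 150 epochs with stronger augmentation), assuming
# COCO train2017 has 118,287 images (assumption) and global batch 64.

COCO_TRAIN_IMAGES = 118_287  # assumption: commonly cited train2017 size
BATCH_SIZE = 64

def total_iterations(epochs: int) -> int:
    steps_per_epoch = -(-COCO_TRAIN_IMAGES // BATCH_SIZE)  # ceiling division
    return epochs * steps_per_epoch

print(total_iterations(300 + 150))  # 832050 steps over the full schedule
```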
Based on the info mentioned in the paper, I am assuming the training is performed on the MS COCO 2017 training dataset. I'd like to ask a few questions:
1) What batch size did you use?
2) What type of GPUs did you use for training, and how many?
Thanks