Open · Robert-TW opened this issue 4 years ago
Is my situation normal? I have about 120,000 training images. Could it be that the dataset is too large and that's why training takes so long? Or is it because I don't have enough iterations? @AlexeyAB Can you tell me how long you spent training on the COCO dataset, and how many V100 GPUs you used? Thanks.
batch=64 subdivisions=8 width=512 height=512
for about 3 weeks on 1x V100 32GB for the 80-class MS COCO train set.
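For reference, the stock yolov4.cfg schedule for 80-class COCO looks roughly like this (these are the defaults as I remember them, so double-check the repo copy for the exact values):

# assumed defaults from yolov4.cfg for 80-class COCO; verify against the repo copy
batch=64
subdivisions=8
width=512
height=512
# ~500k iterations; the learning rate drops at 80% and 90% of max_batches
max_batches = 500500
steps=400000,450000
scales=.1,.1

At batch=64 that is about 32M images seen in total, i.e. roughly 270 passes over the ~118k COCO train images, which is why a single V100 needs weeks.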
Hello @AlexeyAB, previously I successfully trained YOLOv3 with seven classes of COCO data (car, motorcycle, bus, truck, bicycle, people, traffic light), but the loss does not drop when training YOLOv4; it has stayed around 4.
I trained with two V100 GPUs and tried both width=416 height=416 and width=608 height=608, but the result was the same.
The command I use in training: ./darknet detector train cfg/coco.data cfg/yolov4-custom.cfg yolov4.conv.137 -map -gpus 0,1
yolov4-custom.cfg.txt
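For comparison, the custom-object checklist from the repo README for a 7-class cfg would give roughly these values (guideline numbers only, adjust to your own setup):

# guideline values for 7 classes, following the "how to train for custom objects" notes
# classes in every [yolo] layer (there are 3 of them)
classes=7
# filters = (classes + 5) * 3 in each [convolutional] layer right before a [yolo] layer
filters=36
# classes * 2000, but per the README not less than the number of training images
max_batches = 14000
# 80% and 90% of max_batches
steps=11200,12600

With -map enabled it is often more informative to watch the mAP curve in chart.png than the raw loss, since the avg loss can plateau while mAP keeps improving.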