Philharmy-Wang opened 3 years ago
I used ./darknet/scripts/get_coco2017.sh and ./darknet/scripts/get_coco_dataset.sh to get the COCO dataset.
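For reference, a minimal sketch of how those two scripts are invoked (the working directory is an assumption; the script names are the ones mentioned above):

```bash
# Invoked from the directory containing the darknet checkout (assumption)
./darknet/scripts/get_coco_dataset.sh   # COCO 2014 images + labels
./darknet/scripts/get_coco2017.sh       # COCO 2017
```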
It seems that the new commit has this bug. I faced the same problem and solved it by checking out an older commit. Just check out commit 8c9c5171891ea92b0cbf5c7fddf935df0b854540 and it will work.
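For anyone following along, checking out that commit and rebuilding is roughly this (a sketch; it assumes a clean working tree):

```bash
cd darknet
git checkout 8c9c5171891ea92b0cbf5c7fddf935df0b854540
make clean && make   # rebuild the binary against the old commit
```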
Ok! I will try ~.~ Thank you very much!!
@Philharmy-Wang I am facing a similar issue. Did changing to the old commit resolve it?
I used YOLOv4 to train on the COCO 2017 dataset on Google Cloud Platform. After about 400 steps the loss was around 31; after about 900 steps the loss began to rise, and finally it changed to -nan.
The GPU I use is a Tesla V100. Training environment:
Ubuntu 18.04, cuDNN 7.6.5, CUDA 10.2, OpenCV 3.4.4.
The training command I use is: ./darknet detector train cfg/coco.data cfg/yolov4.cfg -map
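For comparison, the training invocation shown in the AlexeyAB/darknet README also passes the pre-trained backbone weights; whether that is related to the -nan loss here is only an assumption:

```bash
# README-style invocation; yolov4.conv.137 are the pre-trained
# convolutional weights, downloaded separately
./darknet detector train cfg/coco.data cfg/yolov4.cfg yolov4.conv.137 -map
```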
I used the COCO 2017 and COCO 2014 datasets for training; the loss is -nan, and I did not get an mAP value.
This is the Makefile I used:
This is the cfg file I used:
This is the chart.png:
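When the loss rises and then turns to -nan this early, a common first experiment is lowering the learning rate and lengthening burn_in in the [net] section of the cfg; a sketch with illustrative values (not taken from this thread):

```
[net]
batch=64
subdivisions=16      # raise this instead if you run out of GPU memory
learning_rate=0.001  # lowered as an experiment against divergence
burn_in=2000         # longer warm-up before the full learning rate applies
max_batches=500500
policy=steps
steps=400000,450000
scales=.1,.1
```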