Wazaki-Ou closed this issue 4 years ago
Thank you for the link @AlexeyAB . I have already checked it before, but I still cannot identify the issue. What do you think about the 2 points I mentioned in my question (1 and 2). Could they be the cause? And is it normal that Yolov3 in the updated repository behaves like Yolov4 in terms of detection speed and no-detection behavior? Shouldn't it behave like Yolov3 from the previous release? Thanks a lot!!
Read: https://github.com/AlexeyAB/darknet#how-to-improve-object-detection
@AlexeyAB Thanks again for your reply. I have already followed the instructions of that part as well, but the problem was not solved. I will try to work on the dataset again to make sure all objects present are labelled (the cases I mentioned in question (1)) Then I will try again with the latest Darknet and let you know in a couple of days if the issue is solved or not.
show mAP on training dataset
show mAP on validation dataset
@AlexeyAB There you go: For training set:
For Validation set:
Show examples of bad detection. And show similar training images with bounding boxes.
if you get high mAP for both Training and Validation datasets, but the network detects objects poorly in real life, then your training dataset is not representative - add more images from real life to it
for each object which you want to detect - there must be at least 1 similar object in the Training dataset with about the same: shape, side of object, relative size, angle of rotation, tilt, illumination. It is desirable that your training dataset include images with objects at different: scales, rotations, lightings, from different sides, on different backgrounds - you should preferably have 2000 different images for each class or more, and you should train 2000*classes iterations or more
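As a quick sanity check against the balance suggested above, one can count how many labeled objects of each class the training set actually contains. A minimal Python sketch, assuming YOLO-format .txt label files; the directory path and class names are assumptions, not anything from this thread:

```python
from collections import Counter
from pathlib import Path

def count_objects(label_dir):
    """Count labeled objects per class id across YOLO-format .txt files."""
    counts = Counter()
    for txt in Path(label_dir).glob("*.txt"):
        for line in txt.read_text().splitlines():
            if line.strip():
                counts[int(line.split()[0])] += 1  # first field is the class id
    return counts

if __name__ == "__main__":
    class_names = ["human", "dog"]        # assumed order from obj.names
    counts = count_objects("data/obj")    # assumed label directory
    for idx, name in enumerate(class_names):
        print(f"{name}: {counts[idx]} labeled objects")
    # rule of thumb quoted above: train at least 2000*classes iterations
    print("suggested max_batches >=", 2000 * len(class_names))
```

If one class comes out with far fewer labeled objects than the other, that alone can explain lopsided detection.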
@AlexeyAB Thank you so much for all the quick replies. I don't have access to the computer I am using for training now, but I will get you some examples tomorrow when I go to school. As for the data, I am sure it is representative, since all the data used for training, validating and for checking the detection at the end was collected in the same environment with the same subjects. However, I will also work on the data for a couple of days to make sure it's all well labelled (especially the pictures containing more than 1 object). Again, thank you for all your help so far. I'll post again tomorrow!!
So sorry for the long wait. I went through all the data, reviewed my script (resolved a small bug) and added multi-labels in the pictures that contained more than one object. I also downloaded again the latest repository just in case, and to make it short, my issue seems to be solved. It took me a long time to add the extra labels and double check everything, and the training took some time. Then I wanted to test with different scenarios to be sure. All the issues I previously had are gone. What I noticed:
I am sure refining the dataset helped solve part of the issue, but YOLOv4 definitely outperforms YOLOv3.
I had been doing some training on my custom dataset (humans and a dog in the same room) for a while using Yolov3 before the repository was updated, but I had no-detection issues in cases where both humans and a dog were in the same room (it mainly detected humans only).
I tried with YOLOv4 hoping it would solve the issue, but I was surprised to notice that Yolov4 failed to even detect humans when they were alone, something the previous model had no problem doing. Even for the dog, the detection of the old model seems better on the same data.
In order to improve the detection, I tried adding negative data of the empty room with some different setups and changing the network size to 608x608, but it still did not solve this issue.
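For reference, darknet treats an image whose label .txt exists but is empty as a negative sample (background, no objects), so the empty-room frames need empty label files and entries in the training list. A small sketch of that bookkeeping step; the folder and list paths are hypothetical:

```python
from pathlib import Path

def add_negative_samples(neg_dir, train_list):
    """Give each negative image an empty label file and list it for training."""
    with open(train_list, "a") as train:
        for img in sorted(Path(neg_dir).glob("*.jpg")):
            img.with_suffix(".txt").write_text("")  # empty label = no objects here
            train.write(f"{img}\n")

# usage (assumed paths): add_negative_samples("data/negatives", "data/train.txt")
```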
Even more surprisingly, training Yolov3 with the latest update seemed to produce the same problems that Yolov4 has: both are much slower when detecting and both fail to detect humans. I was expecting Yolov3 from this update to at least behave like the one from the previous release, but it behaves more like Yolov4.
If there is no way to improve detection with this updated release, I am willing to keep using the previous one (I have both built and working), but then I would really appreciate help with improving detection of both human and dog when they are in the same room (especially when they are a little close). To that end, I have a few concerns/ideas about what could be wrong:
1- In many of the train/valid images, I have pictures with both classes (human & dog), but the labeling is only done for one of them. Let's say I have 2 humans and 1 dog in a picture: the label would cover only one human. In some cases, I would reuse the picture with a different name and a different label, say for the dog. Would this be an issue? I was thinking that maybe the model, when learning on these pictures, would at some point detect all subjects, but then find the label for only one and hence learn to ignore the other subjects. Is it possible to have multi-labeling for the same picture, or should I just avoid using pictures with more than one object to label?
2- Many pictures of the negative data are duplicates, since I am retrieving data from videos and it's hard to change the room setup many times. When I say duplicates, I mean a lot of them: sometimes around 500 or more images are actually the same image. I don't think this would have a negative effect, but I am mentioning it just in case.
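On point 1: the YOLO format expects exactly one .txt per image, with one line per object (`class_id x_center y_center width height`, all coordinates normalized to 0-1), so the two humans and the dog should all live in the same label file rather than in renamed copies of the image. A made-up example of what a label file for such a picture could look like (all values invented for illustration, class 0 = human, class 1 = dog, assuming that order in obj.names):

```
0 0.42 0.55 0.18 0.60
0 0.70 0.52 0.16 0.58
1 0.25 0.80 0.12 0.15
```

Splitting these lines across duplicated copies of the same image teaches the network that the unlabeled subjects are background, which matches the no-detection symptom described here.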
I'm sorry for the long message. I just wanted to put as much information as possible to help understand my issue. I would really appreciate any help I can get here.
Thanks a lot!!
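On point 2 of the message above: hundreds of byte-identical frames add no information and skew the dataset balance, so thinning them out is cheap insurance. A hedged sketch that groups exact duplicates by content hash; the directory path is an assumption:

```python
import hashlib
from pathlib import Path

def find_duplicate_frames(image_dir):
    """Group images by content hash; returns {hash: [paths]} for exact duplicates."""
    groups = {}
    for img in sorted(Path(image_dir).glob("*.jpg")):
        digest = hashlib.sha256(img.read_bytes()).hexdigest()
        groups.setdefault(digest, []).append(img)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

# usage (assumed path): keep paths[0] of each group and delete the rest
# dupes = find_duplicate_frames("data/negatives")
```

Note this only catches byte-identical files; near-duplicate frames from video would need a perceptual hash instead.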
If you have an issue with training - no-detections / Nan avg-loss / low accuracy:
what command do you use? darknet.exe detector train data/obj.data yolov3.cfg darknet53.conv.74 (I also used the cfg and weights for Yolov4)
what dataset do you use?
custom dataset including dogs and humans taken from videos recorded in the same room
what Loss and mAP did you get? Loss goes down to 0.39 and mAP up to 98% in Yolov4 training; in Yolov3, Loss 0.097 and mAP 98%
show chart.png with Loss and mAP
check your dataset - run training with flag -show_imgs, i.e. ./darknet detector train ... -show_imgs, and look at the aug_...jpg images - do you see correct truth bounding boxes?
I checked the dataset and there is no issue with it. I labelled with a different program, but the script I wrote to convert it into Yolo format works well, as I checked it many times (even with the darknet command).
rename your cfg-file to txt-file and drag-n-drop (attach) to your message here
yolo-objv4.txt
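Since the labels came from a conversion script, an independent check of the converted files can rule out format bugs that -show_imgs might miss. A minimal validator sketch, assuming standard YOLO .txt labels (5 fields per line, normalized coordinates); the paths in the usage comment are hypothetical:

```python
from pathlib import Path

def check_label_file(path, num_classes):
    """Return a list of format problems found in one YOLO label file."""
    problems = []
    for n, line in enumerate(Path(path).read_text().splitlines(), 1):
        fields = line.split()
        if not fields:
            continue
        if len(fields) != 5:
            problems.append(f"{path}:{n}: expected 5 fields, got {len(fields)}")
            continue
        cls = int(fields[0])
        coords = [float(v) for v in fields[1:]]
        if not 0 <= cls < num_classes:
            problems.append(f"{path}:{n}: class id {cls} out of range")
        if not all(0.0 < v <= 1.0 for v in coords):
            problems.append(f"{path}:{n}: coordinates not normalized to (0, 1]")
    return problems

# usage (assumed layout):
# for txt in Path("data/obj").glob("*.txt"):
#     for p in check_label_file(txt, num_classes=2):
#         print(p)
```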
show content of generated files bad.list and bad_label.list if they exist
Only 1 image and its corresponding label file. But that's a bit weird, because I removed both files from the dataset and from the train.txt file, so they shouldn't have appeared.
Read How to train (to detect your custom objects) and How to improve object detection in the Readme: https://github.com/AlexeyAB/darknet/blob/master/README.md
I did, and I even tried adding negative data and one training at 608x608, but still got the same results.
show such screenshot with info