Hidayath-Shaik opened this issue 5 years ago
@HidayathullaShaik Hi,
You should get very good results on these images, without any additional changes, after 4000 iterations — but only if your training dataset is correct.
I have referred "train.txt" for both the "valid" and "train" parameters in "obj.data" and trained for 1500 epochs. The mAP was 61%.
Did you train 1500 epochs (epochs = iterations / number_of_images), or 1500 iterations?
What commands did you use for training, mAP, and detection?
In 99% of such problems the training dataset is incorrect. Check your dataset by using Yolo_mark (supports Linux and Windows): https://github.com/AlexeyAB/Yolo_mark
cmake .
make
./linux_mark.sh
Just open your dataset in Yolo_mark and go through all the images by pressing the SPACE key. Do you see correct boxes on all images?
https://github.com/AlexeyAB/darknet#when-should-i-stop-training
Usually 2000 iterations for each class (object) are sufficient, but not less than 4000 iterations in total.
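The two rules of thumb above can be expressed as a small helper. Note that one darknet iteration processes `batch` images (as set in the .cfg file), which the epoch/iteration conversion below assumes; this is an illustrative sketch, not part of darknet itself.

```python
def recommended_iterations(num_classes: int) -> int:
    """Rule of thumb: ~2000 iterations per class, but never fewer than 4000 total."""
    return max(2000 * num_classes, 4000)

def epochs_from_iterations(iterations: int, num_images: int, batch: int = 64) -> float:
    """One darknet iteration consumes `batch` images, so
    epochs = iterations * batch / number_of_images."""
    return iterations * batch / num_images

# Example: 2 classes (gun, knife), ~6982 training images, batch=64 (assumed)
print(recommended_iterations(2))                    # → 4000
print(epochs_from_iterations(1500, 6982, 64))
```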
- Did you train 1500 epochs (epochs = iterations / number_of_images), or 1500 iterations?
- What commands did you use for training, mAP, and detection?
- In 99% of such problems the training dataset is incorrect. Check your dataset by using Yolo_mark (supports Linux and Windows): https://github.com/AlexeyAB/Yolo_mark
Also, when I run detection with the same model via OpenCV, i.e. the cv.dnn.readNet(yolo-weights, yolo-config) method, it does not predict at all, even if I set the threshold to "0". What could be wrong?
If I start training and stop after some iterations, and then restart training with the last saved weights file, would YOLO keep a note of the images already used for training from train.txt, or would it train again from the first image of train.txt?
Thanks, Hidayath
I have trained for 1500 iterations.
This is too few iterations. You should train for at least 4000 iterations.
Detection Command: ./darknet detect "yolov3_weapon.cfg" "backup/yolov3_weapon_last.weights" "testimages/G1.jpg"
This command is incorrect; it is suitable only for the default COCO model.
Use this instead: https://github.com/AlexeyAB/darknet#how-to-use-on-the-command-line
./darknet detector test "data/weapon.data" "yolov3_weapon.cfg" "backup/yolov3_weapon_last.weights" "testimages/G1.jpg"
There are no bad labels when checked in bad.list, but I will check through Yolo_mark.
There are only obvious errors in the bad.list and bad_label.list files. You must check your dataset by using Yolo_mark.
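Besides a visual check in Yolo_mark, the label files themselves can be sanity-checked programmatically. The sketch below assumes the standard YOLO label format (one `class x_center y_center width height` line per object, coordinates normalized to [0, 1]); the function name and error messages are my own.

```python
def check_yolo_label(text: str, num_classes: int) -> list:
    """Return a list of error messages for one YOLO .txt label file's content."""
    errors = []
    for i, line in enumerate(text.strip().splitlines(), 1):
        parts = line.split()
        if len(parts) != 5:
            errors.append(f"line {i}: expected 5 fields, got {len(parts)}")
            continue
        try:
            cls = int(parts[0])
            coords = [float(p) for p in parts[1:]]
        except ValueError:
            errors.append(f"line {i}: non-numeric field")
            continue
        if not 0 <= cls < num_classes:
            errors.append(f"line {i}: class id {cls} out of range")
        if any(not 0.0 <= c <= 1.0 for c in coords):
            errors.append(f"line {i}: coordinate outside [0, 1]")
    return errors

print(check_yolo_label("0 0.5 0.5 0.2 0.3", num_classes=2))  # → [] (valid)
print(check_yolo_label("2 0.5 1.5 0.2 0.3", num_classes=2))  # bad class id and coordinate
```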
Also, when I run detection with the same model via OpenCV, i.e. the cv.dnn.readNet(yolo-weights, yolo-config) method, it does not predict at all, even if I set the threshold to "0". What could be wrong?
There may be many reasons: a badly trained model, incorrect filenames, ...
A quick question: if I start training and stop it after some iterations, a weights file will be created. If I then restart training with the last saved weights file, would YOLO keep a note of the images already used for training from train.txt, or would it train again from the first image of train.txt?
Hi Alexey, is it okay to use the complete image dataset for "valid" in the obj.data file, i.e. the same image dataset for both training and validation?
Thanks, Hidayath
A quick question: if I start training and stop it after some iterations, a weights file will be created. If I then restart training with the last saved weights file, would YOLO keep a note of the images already used for training from train.txt, or would it train again from the first image of train.txt?
Yolo will not remember which images were used. And it doesn't matter, since Yolo reads images randomly.
Is it okay to use the complete image dataset for "valid" in the obj.data file, i.e. the same image dataset for both training and validation?
It is OK, but the best practice is to use 80% for the training dataset, 10% for the validation dataset, and 10% for the test dataset; or 90% for training and 10% for validation (with the validation and test datasets the same).
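The 80/10/10 split described above can be produced from a single list of image paths with a short script; the paths, ratios, and seed below are illustrative, and each resulting list would be written out as its own .txt file for obj.data.

```python
import random

def split_dataset(image_paths, train_frac=0.8, valid_frac=0.1, seed=42):
    """Shuffle and split a list of image paths into train/valid/test lists."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)   # deterministic shuffle for reproducibility
    n_train = int(len(paths) * train_frac)
    n_valid = int(len(paths) * valid_frac)
    return (paths[:n_train],
            paths[n_train:n_train + n_valid],
            paths[n_train + n_valid:])

# Hypothetical file list
paths = [f"data/obj/img_{i}.jpg" for i in range(100)]
train, valid, test = split_dataset(paths)
print(len(train), len(valid), len(test))  # → 80 10 10
```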
Hi Alexey, I am performing YOLO training for gun and knife classes. All the dataset images are of different sizes, downloaded from Google. Labelling of the images is done using LabelImg. Gun dataset count: 4705; knife dataset count: 2277.

I have referred "train.txt" for both the "valid" and "train" parameters in "obj.data" and trained for 1500 epochs. The mAP was 61%, but when I test the prediction via the "./darknet detect" command, the object is detected as the full frame, as shown below. In the case below, YOLO predicts wrongly:
Below is my cfg file: yolov3_weapon.txt
Questions:
I am definitely going to train for more epochs, but I am worried about the current accuracy. It would be great if you could shed some light on how to fix these issues.
Thanks, Hidayath