AlexeyAB / darknet

YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)
http://pjreddie.com/darknet/

Loss converges quickly but AP is low #1081

Open A-levy opened 6 years ago

A-levy commented 6 years ago

After I recalculated the anchors, the loss converges quickly but AP is low. The command I used: ./darknet detector calc_anchors data/mydata.data -num_of_clusters 6 -width 416 -height 416
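For context, calc_anchors clusters the width/height of every labeled box with k-means and uses the cluster centers as anchors. A rough sketch of that idea in plain Python, assuming Euclidean k-means on toy box sizes (darknet's actual implementation uses an IoU-style distance, so real anchors will differ):

```python
import random

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (width, height) pairs; cluster centers become the anchors."""
    random.seed(seed)
    centers = random.sample(boxes, k)
    for _ in range(iters):
        # Assign each box to its nearest center.
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            j = min(range(k), key=lambda i: (w - centers[i][0])**2 + (h - centers[i][1])**2)
            clusters[j].append((w, h))
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return sorted(centers)

# Toy data with two obvious size groups -> two anchors near the group means.
boxes = [(10, 14), (12, 16), (11, 15), (80, 82), (78, 80), (82, 84)]
print(kmeans_anchors(boxes, k=2))  # [(11.0, 15.0), (80.0, 82.0)]
```

Because anchors are only priors for box regression, recalculated anchors that fit the dataset badly (or are assigned to the wrong mask indices) can hurt AP even while the loss looks fine.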

The avg-loss chart is attached (image: avg_loss).

Here is part of my cfg file (YOLOv3-tiny):

batch=64
subdivisions=8
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation=1.5
exposure=1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches=55000
policy=steps
steps=100,5000,40000
scales=10,.1,.1
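As an aside, with policy=steps darknet ramps the learning rate up during burn_in and then multiplies it by each scale once the corresponding step is passed. A small sketch using the values from this cfg (the warmup exponent of 4 is darknet's default and an assumption here):

```python
def lr_at(it, base=0.001, burn_in=1000,
          steps=(100, 5000, 40000), scales=(10, 0.1, 0.1), power=4):
    """Learning rate at iteration `it` for policy=steps with burn_in warmup."""
    # Warmup: ramp up as (it / burn_in)**power until burn_in is reached.
    if it < burn_in:
        return base * (it / burn_in) ** power
    # After warmup: apply each scale whose step has been passed.
    lr = base
    for step, scale in zip(steps, scales):
        if it >= step:
            lr *= scale
    return lr

print(lr_at(500))    # still warming up: 0.001 * 0.5**4 = 6.25e-05
print(lr_at(2000))   # past step 100: 0.001 * 10 = 0.01
print(lr_at(10000))  # past step 5000 too: 0.01 * 0.1 = 0.001
print(lr_at(50000))  # past step 40000: 0.0001
```

Note that with these settings the rate sits at 0.01 for most of training, which is 10x the base rate; that is worth keeping in mind when comparing runs.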

At first I used the default anchors and got 81 AP with about 0.5 loss. But now I only get 40 AP and about 0.7 loss, and I don't know why. What should I do to get higher AP and lower loss?

A-levy commented 6 years ago

My dataset has 6 classes, but I only train on one of them. When I recalculated the anchors, I calculated them over all classes. Does that matter? If that is the reason, how can I recalculate the anchors for just one class?

AlexeyAB commented 6 years ago

At first I used the default anchors and got 81 AP with about 0.5 loss. But now I only get 40 AP and about 0.7 loss, and I don't know why. What should I do to get higher AP and lower loss?

For how many classes did you train to get 81 AP?

A-levy commented 6 years ago

Thank you for your reply; here are my results.

After changing the anchors:

smielab@smielab-Z97X-UD3H:~/levy/darknet/voc_eval-master$ python 123.py /home/smielab/levy/darknet/results/1car.txt /home/smielab/levy/infrared_val.txt car
Reading annotation for 1/4000 … Reading annotation for 3901/4000
0.4081862715579623

Using the default anchors:

smielab@smielab-Z97X-UD3H:~/levy/darknet/voc_eval-master$ python 123.py /home/smielab/levy/darknet/results/tinyV2resultcar.txt /home/smielab/levy/infrared_val.txt car
Reading annotation for 1/4000 … Reading annotation for 3901/4000
0.810911663715933

I didn't change the mask; mask = 1,2,3. By the way, can you tell me what the role of mask is? Thank you!

AlexeyAB commented 6 years ago

mask= lists the indices of the anchors that will be used by that [yolo] layer. Use mask=0,1,2 in the last [yolo] layer when training on your custom objects.
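For illustration, this is how the two [yolo] layers in a stock yolov3-tiny cfg split six anchors between them (the anchor values below are the tiny defaults; with recalculated anchors only the anchors= line changes, while mask= still picks indices into it):

```
# First [yolo] layer (coarse 13x13 grid): the three largest anchors.
[yolo]
mask = 3,4,5
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319

# Last [yolo] layer (finer 26x26 grid): the three smallest anchors.
[yolo]
mask = 0,1,2
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
```

With mask=1,2,3 the last layer skips the smallest anchor and reuses anchor 3, so small objects get a poor prior, which is consistent with the recall drop seen here.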

Also, can you show the output of this command for the 80 AP case and the 40 AP case: ./darknet detector map data/mydata.data yolo-obj.cfg backup\yolo-obj_50000.weights

A-levy commented 6 years ago

Use the default anchors:

detections_count = 30440, unique_truth_count = 11804
class_id = 0, name = car, ap = 76.61 %
for thresh = 0.25, precision = 0.86, recall = 0.71, F1-score = 0.78
for thresh = 0.25, TP = 8348, FP = 1342, FN = 3456, average IoU = 66.00 %

mean average precision (mAP) = 0.766059, or 76.61 %
Total Detection Time: 29.000000 Seconds

Using the recalculated anchors:

detections_count = 18953, unique_truth_count = 11804
class_id = 0, name = car, ap = 50.63 %
for thresh = 0.25, precision = 0.90, recall = 0.39, F1-score = 0.55
for thresh = 0.25, TP = 4625, FP = 497, FN = 7179, average IoU = 68.79 %

mean average precision (mAP) = 0.506321, or 50.63 %
Total Detection Time: 27.000000 Seconds
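The precision/recall/F1 figures in these map summaries follow directly from TP/FP/FN, which makes the failure mode easy to read off: precision barely changed, but recall collapsed. A quick check in Python with the numbers above:

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from detection counts at a fixed threshold."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Default anchors run (TP/FP/FN copied from the map output above).
p, r, f1 = prf1(tp=8348, fp=1342, fn=3456)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.86 0.71 0.78

# Recalculated anchors run: precision holds, recall drops to 0.39.
p, r, f1 = prf1(tp=4625, fp=497, fn=7179)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.39 0.55
```

A recall drop with stable precision and a much lower detections_count suggests the new anchors simply fail to match many ground-truth boxes, rather than producing bad boxes.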

AlexeyAB commented 6 years ago

@A-levy Yes, this is strange. Did you train for the same number of iterations in both cases? Try training with the new anchors and with mask=0,1,2 in the last layer. If that doesn't help, use the default anchors.

A-levy commented 6 years ago

@AlexeyAB OK, I will try it later. Thank you! By the way, do you have an online object-detection setup that uses an industrial camera, like a Point Grey camera?

AlexeyAB commented 6 years ago

@A-levy I used a 2Mpx HikVision DS-2CD4025FWD-AP network camera over RTSP. But it had significant latency even when I used GStreamer.

A-levy commented 6 years ago

@AlexeyAB Sorry, I have no idea about your problem; I have never used that camera. I now run the default-anchors model on an NVIDIA TX2. When I test on a video (1280*720) I get about 24 FPS, but when I use the Point Grey camera for online detection I only get about 5 FPS. Maybe something is wrong with my code; this problem is bothering me now.

AlexeyAB commented 6 years ago

@A-levy

Try building Darknet with LIBSO=1 in the Makefile and test the performance of your camera with a command like: ./uselib cfg/coco.names yolov3.cfg yolov3.weights <network path to your camera>

A-levy commented 6 years ago

@AlexeyAB

  1. I have tested my camera; it can reach about 50 FPS (1280*720).

  2. Yes, it is.

  3. It seems my camera is different. At first I used the command ./darknet detector demo <....> to open the camera and detect, but it does not work: this camera cannot be opened with OpenCV alone, so I open it with its SDK and sample code. I have now tried ./uselib cfg/coco.names yolov3.cfg yolov3.weights <169.254.60.20>; it shows "Video-stream stopped!" and I see nothing.

I have now modified the camera's sample code so it can grab an image and save it, so I think what I need to do is take that image as input and let Darknet detect it. I finished my code but only get about 5 FPS. Can you give me some advice, e.g. which Darknet function I should use? Thank you!

AlexeyAB commented 6 years ago

@A-levy

I open this camera by using its SDK and sample code.

So I think the problem is in your code.

I have now modified the camera's sample code so it can grab an image and save it, so I think what I need to do is take that image as input and let Darknet detect it. I finished my code but only get about 5 FPS. Can you give me some advice, e.g. which Darknet function I should use? Thank you!

Do you save these images as JPEG files?

Look at these parts of the code.
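The likely bottleneck is disk I/O: saving every frame to a file and re-reading it adds an encode/decode and file-system round trip per frame. A minimal Python sketch of the alternative, handing frames from the capture loop to the detector through an in-memory queue; capture_frame and detect here are hypothetical stand-ins for the camera SDK's grab call and the Darknet detection call:

```python
import queue
import threading

def capture_frame(i):
    # Hypothetical stand-in for the camera SDK's grab call:
    # returns a raw in-memory frame buffer instead of writing a file.
    return f"frame-{i}"

def detect(frame):
    # Hypothetical stand-in for running Darknet on an in-memory image.
    return f"detections({frame})"

frames = queue.Queue(maxsize=4)  # small buffer gives back-pressure

def capture_loop(n):
    for i in range(n):
        frames.put(capture_frame(i))  # hand the buffer over directly
    frames.put(None)                  # sentinel: end of stream

results = []

def detect_loop():
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(detect(frame))

t = threading.Thread(target=capture_loop, args=(3,))
t.start()
detect_loop()
t.join()
print(results)  # ['detections(frame-0)', 'detections(frame-1)', 'detections(frame-2)']
```

The same shape works in C against the SDK's frame callback: convert the raw buffer to Darknet's image format in memory and never touch the disk inside the loop.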

A-levy commented 6 years ago

@AlexeyAB I save those images as BMP files. Thank you very much! I will try it.