PINTO0309 / OpenVINO-YoloV3

YoloV3/tiny-YoloV3+RaspberryPi3/Ubuntu LaptopPC+NCS/NCS2+USB Camera+Python+OpenVINO
https://qiita.com/PINTO
Apache License 2.0

improvements with openvino_tiny-yolov3_test.py #34

naufil601 opened this issue 5 years ago

naufil601 commented 5 years ago

Hi,

Thanks for the Python script that improves test accuracy for YOLOv3-tiny. I trained YOLOv3-tiny with darknet on my own dataset and changed the class labels accordingly. I converted the model to a frozen .pb file, and then converted that .pb model to IR (.xml/.bin) files using the OpenVINO toolkit.

I'm using your Python script (openvino_tiny-yolov3_test.py) to preprocess and post-process detections from the Movidius (Intel Neural Compute Stick). I have changed the labels as needed. The problem is that I'm getting some false positives in the results. Could you please advise what kind of tweaks I can make to your script so that it adapts to my testing environment?
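As a purely illustrative sketch (the names below are hypothetical, not the script's actual variables), the usual first tweak for reducing false positives is to raise the confidence threshold applied when filtering detections in post-processing:

```python
# Hypothetical post-processing filter; adjust the threshold to your own data.
# Overlap suppression (NMS) runs separately after this filtering step.
PROB_THRESHOLD = 0.6   # minimum objectness * class probability to keep a box

def filter_detections(detections):
    """detections: iterable of (label, confidence, xmin, ymin, xmax, ymax) tuples."""
    return [d for d in detections if d[1] >= PROB_THRESHOLD]
```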

Thanks for your help.

PINTO0309 commented 5 years ago

Please upgrade OpenVINO to 2019 R1. See #33.

naufil601 commented 5 years ago

@PINTO0309 Thanks for the quick reply. I believe there must be some issue with the Myriad plugin in the 2018 version.

But even if I convert the model for CPU, it still gives some false positives with very high confidence, whereas testing the same video with darknet gives no such false positives.

Can you suggest some possible reasons for this?

PINTO0309 commented 5 years ago

I don't know exactly how you generated your .pb, .bin, and .xml files, so I can't give a precise answer. Common causes are:

  1. Insufficient training epochs
  2. Incorrect .cfg definition
  3. Forgetting the "--tiny" option during model conversion
  4. BGR-to-RGB (or RGB-to-BGR) channel-order mix-up
  5. Wrong mean values
  6. Wrong normalization values (see the preprocessing sketch below for items 4-6)
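A minimal preprocessing sketch for items 4-6, assuming the model was trained with standard darknet preprocessing (RGB channel order, pixel values scaled to 0-1, no mean subtraction). If the IR was generated with `--reverse_input_channels` and `--scale_values`, the plugin already applies these steps and they must not be repeated in Python:

```python
import cv2
import numpy as np

def preprocess(frame_bgr, input_size=416):
    # OpenCV delivers BGR; darknet-trained YOLOv3 expects RGB (item 4).
    img = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (input_size, input_size))
    # No mean subtraction (item 5); scale 0-255 down to 0-1 (item 6).
    img = img.astype(np.float32) / 255.0
    # HWC -> CHW, then add the batch dimension expected by the IR input.
    img = img.transpose((2, 0, 1))
    return img[np.newaxis, :]
```
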
derek-zr commented 5 years ago

Same result as you. I use the 2019 R1 version but still get some false positives. See https://github.com/PINTO0309/OpenVINO-YoloV3/issues/32

naufil601 commented 5 years ago

@derek-zr did you find any way out?

derek-zr commented 5 years ago

> @derek-zr did you find any way out?

Still trying. I tested the .pb model and the result is good, but the IR model has many false positives. So I think the cause is the .bin/.xml conversion.
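One rough way to confirm that (file and tensor names below are placeholders for your own model): feed the same input to the frozen .pb with TensorFlow 1.x and to the IR with the OpenVINO Python API, and compare the raw outputs. If they diverge, the problem is in the conversion rather than in the post-processing:

```python
import numpy as np
import tensorflow as tf
from openvino.inference_engine import IECore, IENetwork

x = np.random.rand(1, 416, 416, 3).astype(np.float32)  # NHWC dummy input

# --- frozen .pb (TensorFlow 1.x) ---
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_yolov3_tiny.pb", "rb") as f:        # placeholder path
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    with tf.Session(graph=graph) as sess:
        tf_out = sess.run("output_boxes:0", feed_dict={"inputs:0": x})  # placeholder node names

# --- IR (.xml / .bin) ---
ie = IECore()
net = IENetwork(model="frozen_yolov3_tiny.xml", weights="frozen_yolov3_tiny.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_blob = next(iter(net.inputs))
ir_out = exec_net.infer({input_blob: x.transpose(0, 3, 1, 2)})  # IR expects NCHW

print("TF :", tf_out.shape, float(tf_out.min()), float(tf_out.max()))
for name, blob in ir_out.items():
    print("IR :", name, blob.shape, float(blob.min()), float(blob.max()))
```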

naufil601 commented 5 years ago

Yes. Same results here.

derek-zr commented 5 years ago

> Yes. Same results here.

It seems I found the reasons. I tried the cpp script with the COCO weights and the result is pretty good, so I guess there are some problems in how we modified the test.py. @PINTO0309 If I want to use a local video with a higher resolution, what should I modify in the preprocessing code? I also found that even when I use the cpp version with my own model there are still some false positives, which I guess means the original weights should be trained for more epochs to become more accurate.

derek-zr commented 5 years ago

After some experiments, I found some reasons. First, for images with an aspect ratio that is not 1:1, the drawing-location calculation may be wrong, so some boxes are displaced. Second, I wonder whether the aspect-ratio-preserving preprocessing is necessary, because there are still a lot of false positives with my own model, and the COCO model is not so accurate either.
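A sketch of the aspect-ratio handling this points at, assuming letterbox-style padding (function names and the padding value are illustrative): boxes predicted on the padded square input have to be shifted by the padding offset and divided by the scale before being drawn on the original frame, otherwise they show up displaced for non-square videos:

```python
import cv2
import numpy as np

def letterbox(frame, input_size=416):
    """Resize while keeping the aspect ratio, then pad to input_size x input_size."""
    h, w = frame.shape[:2]
    scale = min(input_size / w, input_size / h)
    new_w, new_h = int(w * scale), int(h * scale)
    resized = cv2.resize(frame, (new_w, new_h))
    canvas = np.full((input_size, input_size, 3), 128, dtype=np.uint8)  # grey padding
    dx, dy = (input_size - new_w) // 2, (input_size - new_h) // 2
    canvas[dy:dy + new_h, dx:dx + new_w] = resized
    return canvas, scale, dx, dy

def box_to_original(box, scale, dx, dy):
    """Map (xmin, ymin, xmax, ymax) from the padded input back to the original frame."""
    xmin, ymin, xmax, ymax = box
    return ((xmin - dx) / scale, (ymin - dy) / scale,
            (xmax - dx) / scale, (ymax - dy) / scale)
```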

PINTO0309 commented 5 years ago

I recognize that there is an aspect-ratio bug. Please modify the cpp program by referring to the Python program.

derek-zr commented 5 years ago

> I recognize that there is an aspect-ratio bug. Please modify the cpp program by referring to the Python program.

Thanks for your reply. I also found that the cpp version doesn't have the same preprocessing. But even when I use the Python preprocessing, there are still a lot of false positives with my own model.

derek-zr commented 5 years ago

After a longer training process, the new model still performs badly. The .pb model performs great, but the IR model produces many false positives.

naufil601 commented 5 years ago

@derek-zr I converged my network to a loss of 0.5 and am still getting false positives with this Python script.

derek-zr commented 5 years ago

Yeah. My loss is even lower, but there are still some false positives and the detection results are bad. I think it's a bug in Intel's conversion Python code.

derek-zr commented 5 years ago

Have you solved it? One author on the Intel forum says it may be a bug in the logistic-layer code: https://software.intel.com/en-us/forums/computer-vision/topic/808504#comment-1938506 I tried to change the code, but the results are still bad.
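For reference, a hedged sketch of where the logistic function sits in YOLOv3 output decoding (names and values are illustrative, not the converter's actual code): the centre offsets, the objectness score, and the class scores all pass through a sigmoid, while width and height use exp() with the anchor priors. If any of those sigmoids is skipped, raw logits leak through and many spurious boxes clear the threshold:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_cell(tx, ty, tw, th, obj, class_logits,
                col, row, grid, anchor_w, anchor_h, input_size=416):
    """Decode one anchor of one grid cell of a YOLOv3 output map (illustrative)."""
    bx = (col + sigmoid(tx)) / grid * input_size        # box centre x in pixels
    by = (row + sigmoid(ty)) / grid * input_size        # box centre y in pixels
    bw = np.exp(tw) * anchor_w                          # width from anchor prior
    bh = np.exp(th) * anchor_h                          # height from anchor prior
    confidence = sigmoid(obj) * sigmoid(np.asarray(class_logits)).max()
    return bx, by, bw, bh, confidence
```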

ybloch commented 4 years ago

I have the same issue: bad results with the IR, good results with TF... Has anyone found a solution for this?