Closed: vongracia closed this issue 2 years ago
Please follow part 3.4, "Compare Accuracy Between Floating Point and Quantized Models (Optional)", in https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/07-yolov4-tutorial
@lishixlnx thanks
But the evaluation described in 3.4 applies when you have converted the model to Caffe, doesn't it?
I have instead done the conversion to TensorFlow described in part 2.
Is it still applicable? Thanks
For the Darknet part, it seems this file (contained in the repository above, under the /scripts directory) needs to be used: tf_eval_yolov4_coco_2017.py
But this script is written for the COCO dataset. As I am using my own custom dataset, could anyone give me some hints on how to modify this Python file to evaluate my float and quantized models?
Thank you!!!
I think you can refer to the implementation of the COCOeval logic in pycocotools.cocoeval. The two inputs should be your ground truth (GT) and your detection results.
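As a minimal sketch of that idea (assuming the custom dataset's ground truth and the model's detections have both been converted to COCO-style JSON first; the file names below are placeholders, not files from the tutorial):

```python
# Minimal sketch: score detections against ground truth with pycocotools.
# Assumes COCO-format annotation and results files (placeholder names).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("my_custom_gt.json")                    # custom-dataset ground truth
coco_dt = coco_gt.loadRes("my_model_detections.json")  # detections: [{"image_id", "category_id", "bbox", "score"}, ...]

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # match detections to GT per image and category
coco_eval.accumulate()  # build precision/recall curves
coco_eval.summarize()   # print the standard AP/AR table

print("mAP@[0.5:0.95]:", coco_eval.stats[0])
```

Running this once with the float model's detections and once with the quantized model's detections gives two mAP numbers you can compare directly.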
Closing since there has been no activity for more than 3 weeks; please reopen if you still have questions. Thanks.
Hi folks,
I've followed this tutorial to quantize a Darknet YOLOv4 model, and then compile and deploy it onto a ZCU102 for inference: https://github.com/Xilinx/Vitis-AI-Tutorials/tree/1.4/Design_Tutorials/07-yolov4-tutorial
I can run inference properly. My intention now is to evaluate the float model (GPU) versus the quantized model that was deployed on the board.
Can you provide me some links on how to do so? I've followed the tutorial above.
I need to analyze the difference in object-detection accuracy between both models, as well as the FPS for detection on video.
In the tutorial above the examples are for the COCO dataset, but I do not know how to do that for my custom data. Thank you, Antonio