AlexeyAB / darknet

YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet )
http://pjreddie.com/darknet/

Acknowledgement #2502


anandkoirala commented 5 years ago

Hi AlexeyAB, After a while I came up with a publication, and you have been acknowledged. Sorry to post this as an issue, but I don't have your contact email. Here is the link: https://doi.org/10.1007/s11119-019-09642-0 If you wish to view it in full, please send me an email at a.koirala@cqu.edu.au and I will share the full link. Regards, Anand

drapado commented 5 years ago

Hi, I just read the paper, very interesting work! It's nice to see other work in the agricultural sector.

Would it be possible to have the cfg and pre-trained weights of the MangoYOLO model? I understand if they are not shareable. Thanks!

anandkoirala commented 5 years ago

Hi @drapado Thanks for your interest and compliments. I can share the full link with you if you wish, but I am sorry to say that the cfg and pre-trained weights are not shareable at this moment. The architecture is detailed with a block diagram and explanation in the publication. Moreover, the annotated image dataset has been released publicly (link in the paper). I am always open to collaborations and discussions. Regards, Anand

AlexeyAB commented 5 years ago

@anandkoirala Hi, Thanks for the article!

It is very interesting that you achieved higher average precision with MangoYOLO(s/pt)-512 than with the full YOLOv3-512 model (table 4).

Did you use your own script to calculate Average Precision for the different models (YOLO/Faster R-CNN/...)?

anandkoirala commented 5 years ago

@AlexeyAB For a fair performance comparison of the different architectures/models, a common script was used. As detailed in the section 'Training with fruit images', the script is 'voc_eval_py3.py' from the official 'py-faster-rcnn' GitHub repo for Faster R-CNN. This script provides an option to choose between two different methods for approximating the area under the precision-recall curve (AP), as used by PASCAL VOC-2007 and VOC-2012. The VOC-2012 method is the more accurate one, and it was used in my paper. I have not gone through the code used by YOLO, but I have noticed that YOLO returns lower AP scores than the Faster R-CNN script does. Regards, Anand
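For context, the difference between the two AP approximations can be sketched as follows. This is an illustrative reimplementation following the well-known logic of the `voc_ap` helper in the py-faster-rcnn evaluation code (VOC-2007 11-point interpolation vs. VOC-2012 exact area under the interpolated curve), not the exact script used in the paper:

```python
import numpy as np

def voc_ap(recall, precision, use_07_metric=False):
    """Approximate the area under the precision-recall curve (AP).

    recall, precision: 1-D arrays sorted by detection confidence.
    use_07_metric=True  -> VOC-2007 11-point interpolation
    use_07_metric=False -> VOC-2012 exact area under the precision envelope
    """
    if use_07_metric:
        # Sample the maximum precision at 11 recall thresholds 0.0, 0.1, ..., 1.0
        ap = 0.0
        for t in np.arange(0.0, 1.1, 0.1):
            p = precision[recall >= t].max() if np.any(recall >= t) else 0.0
            ap += p / 11.0
        return ap
    # VOC-2012 style: pad the curve at both ends
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically decreasing (the "envelope")
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    # Sum rectangle areas wherever recall changes
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# Toy example: two detections, the second one a false positive
rec = np.array([0.5, 1.0])
prec = np.array([1.0, 0.5])
print(voc_ap(rec, prec, use_07_metric=False))  # 0.75
print(voc_ap(rec, prec, use_07_metric=True))   # ~0.7727
```

Because the 2007 metric coarsely samples only 11 recall points, the two methods generally give slightly different numbers for the same detections, which is one reason AP figures from different evaluation scripts are not directly comparable.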

anandkoirala commented 5 years ago

@AlexeyAB In this application a higher AP score was reported for MangoYOLO compared to the full YOLOv3. In my recent Grad-CAM visualization experiment it was observed that the intermediate detection layer was mostly responsible for detecting the fruit class, which indicates there is still some room to modify the architecture further without a significant decrease in detection performance. However, the architecture optimization is specific to the application and to the nature of the dataset or the class. You know better...