A simple TensorFlow implementation of YOLO v1 and YOLO v2.
Paper yolo v1: You Only Look Once: Unified, Real-Time Object Detection
Paper yolo v2: YOLO9000: Better, Faster, Stronger
For YOLO v2, we use 9 anchor boxes computed by k-means clustering on the COCO dataset.
| data augmentation | pretrained vgg16 | pretrained darknet |
| --- | --- | --- |
| :x: | :heavy_check_mark: | :x: |
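The k-means anchor computation mentioned above can be sketched as follows. This is a minimal NumPy sketch, not the repo's actual script: the function names (`kmeans_anchors`, `iou_wh`) and the toy box data are our own. As in the YOLO9000 paper, the distance metric is `1 - IoU` between box and anchor shapes, treating all boxes as centered at the same point.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, assuming boxes share a common center."""
    inter_w = np.minimum(boxes[:, None, 0], anchors[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    inter = inter_w * inter_h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
          + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """k-means on box (w, h) with distance d = 1 - IoU."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # nearest cluster = highest IoU (lowest 1 - IoU)
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        new = np.array([boxes[assign == i].mean(axis=0)
                        if np.any(assign == i) else anchors[i]
                        for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors

# toy data: random box widths/heights standing in for COCO annotations
boxes = np.abs(np.random.default_rng(1).normal(
    loc=[2.0, 3.0], scale=[1.0, 1.5], size=(500, 2))) + 0.1
anchors = kmeans_anchors(boxes, k=9)
print(anchors)  # 9 (w, h) anchor shapes
```

On real data, `boxes` would be the widths and heights of ground-truth boxes from the COCO annotations, usually normalized to the grid scale before clustering.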
---
- Pretrained VGG16 (Google Drive): https://drive.google.com/open?id=1LTptCY96ABAUlJHUJq6MhqNrDQN7JfQP
- Dataset (Pascal VOC 2007): https://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar
---
[1] Redmon, Joseph, et al. "You Only Look Once: Unified, Real-Time Object Detection." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
[2] Redmon, Joseph, and Ali Farhadi. "YOLO9000: Better, Faster, Stronger." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 7263-7271.