
OpenLabeling: open-source image and video labeler


Image labeling in multiple annotation formats: PASCAL VOC and YOLO darknet.
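
For reference, a YOLO darknet annotation is a plain-text file with one line per bounding box, formatted as class_index x_center y_center width height, with coordinates normalized by the image size (the numbers below are illustrative):

0 0.512433 0.433742 0.210417 0.370370

PASCAL VOC annotations store the same boxes as XML files with absolute pixel coordinates.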

Citation

This project was developed for the following paper; please consider citing it:

@INPROCEEDINGS{8594067,
  author={J. {Cartucho} and R. {Ventura} and M. {Veloso}},
  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
  title={Robust Object Recognition Through Symbiotic Deep Learning In Mobile Robots}, 
  year={2018},
  pages={2336-2341},
}


Quick start

To start using the YOLO Bounding Box Tool, you need to download the latest release or clone the repo:

git clone --recurse-submodules git@github.com:Cartucho/OpenLabeling.git

Prerequisites

You need to install the dependencies listed in requirements.txt. You can install them all at once by simply running:

python -m pip install -U pip
python -m pip install -U -r requirements.txt
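
If you want to keep the dependencies isolated, here is a minimal sketch using Python's built-in venv module (optional; not part of the project's own instructions):

python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
python -m pip install -U pip
python -m pip install -U -r requirements.txt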

Run project

Step by step:

  1. Open the main/ directory
  2. Insert the input images and videos in the folder input/
  3. Insert the classes in the file class_list.txt (one class name per line)
  4. Run the code (see usage below)
  5. You can find the annotations in the folder output/

     python main.py [-h] [-i] [-o] [-t] [--tracker TRACKER_TYPE] [-n N_FRAMES]

     optional arguments:
      -h, --help                Show this help message and exit
      -i, --input               Path to images and videos input folder | Default: input/
      -o, --output              Path to output folder (if using the PASCAL VOC format it's important to set this path correctly) | Default: output/
      -t, --thickness           Bounding box and cross line thickness (int) | Default: -t 1
      --tracker TRACKER_TYPE    Tracker type to use: ['CSRT', 'KCF', 'MOSSE', 'MIL', 'BOOSTING', 'MEDIANFLOW', 'TLD', 'GOTURN', 'DASIAMRPN']
      -n N_FRAMES               Number of frames to track the object for
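
    For example, to label everything in input/ using the KCF tracker with thicker lines (the flag values here are illustrative, not defaults):

     python main.py -i input/ -o output/ -t 2 --tracker KCF -n 100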

    To use the DASIAMRPN tracker:

  6. Install the DaSiamRPN submodule (see the command after this list) and download the model (VOT) from Google Drive
  7. Copy it into DaSiamRPN/code/
  8. Set the default tracker in main.py, or run it with --tracker DASIAMRPN
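
If the submodule was not fetched when cloning (i.e. the repo was cloned without --recurse-submodules), the standard git command below should pull it in:

git submodule update --init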

How to use the deep learning feature

Download the pre-trained model from http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz and put it into object_detection/models (create the models folder if necessary). Make sure to extract the model.

Note: the default model used in main_auto.py is ssdlite_mobilenet_v2_coco_2018_05_09. To use a different pre-trained model, set graph_model_path in main_auto.py.
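
A minimal sketch of those download steps (run from the directory that contains object_detection/; assumes wget and tar are available):

cd object_detection
mkdir -p models   # create the models folder if it does not exist
cd models
wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
tar -xzf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz   # extract the model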

GUI usage

Keyboard, press:

Key   Description
a/d   previous/next image
s/w   previous/next class
e     edges
h     help
q     quit

Video:

Key   Description
p     predict the next frames' labels

Mouse:

Authors