YouTube BoundingBox

This repo contains helpful scripts for using the YouTube BoundingBoxes dataset released by Google Research. The dataset itself is distributed only as CSV annotation files: the CSVs contain links to the videos on YouTube, but downloading the video files is left to you. These scripts are provided for downloading, cutting, and decoding the videos into a usable form.

These scripts were written by Mark Buckler and the YouTube BoundingBoxes dataset was created and curated by Esteban Real, Jonathon Shlens, Stefano Mazzocchi, Xin Pan, and Vincent Vanhoucke. The dataset web page is here and the accompanying whitepaper is here.

Installing the dependencies

  1. Clone this repository.

  2. Install the majority of the dependencies by running pip install -r requirements.txt in this repo's directory.

  3. Install wget, ffmpeg, and youtube-dl through your package manager. For most platforms this should be straightforward, but Ubuntu 14.04 users will need to update their apt-get repository before installing ffmpeg, as shown here.

Some small tweaks may be needed for different software environments. These scripts were developed and tested on Ubuntu 14.04.

Running the scripts

Note: These scripts require at least Python 3.0 and were developed with Python 3.5.2.

Download

The download.py script downloads the annotated videos and cuts them down to the range in which they are annotated. Parallel video downloads are supported so that you can saturate your download bandwidth even though YouTube throttles per video. Because video clips are cut by re-encoding with FFmpeg (see here for why), the bottleneck is compute speed rather than download speed. For this reason, set the number of threads to the number of cores on your machine for best results.

python3 download.py [VID_DIR] [NUM_THREADS]
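The download-then-cut flow described above can be sketched roughly as follows. This is an illustrative outline, not the repo's actual implementation: the helper names, clip naming, and job-tuple layout are assumptions.

```python
import subprocess
from multiprocessing import Pool


def cut_command(src, start_ms, end_ms, dst):
    """Build an ffmpeg command that re-encodes only the annotated range.

    Re-encoding (rather than a stream copy) lets ffmpeg cut at exact
    timestamps instead of snapping to the nearest keyframe.
    """
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-ss", str(start_ms / 1000.0),
        "-to", str(end_ms / 1000.0),
        dst,
    ]


def download_and_cut(job):
    """Download one video with youtube-dl, then cut it to its annotated range."""
    yt_id, start_ms, end_ms, out_path = job
    raw = f"{yt_id}_raw.mp4"  # temporary full-length download
    subprocess.run(["youtube-dl", "-f", "mp4", "-o", raw,
                    f"https://youtu.be/{yt_id}"], check=True)
    subprocess.run(cut_command(raw, start_ms, end_ms, out_path), check=True)


# Usage sketch: build (yt_id, start_ms, end_ms, out_path) jobs from the CSV,
# then fan them out across a pool sized to your core count, e.g.:
#     with Pool(processes=NUM_THREADS) as pool:
#         pool.map(download_and_cut, jobs)
```

Since each worker spends most of its time in ffmpeg re-encoding, a pool sized to the machine's core count keeps both the network and the CPU busy.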

Object Detection Decoder & VOC 2007 Converter

For the detection task, a script for decoding frames and converting the CSV annotations into the VOC 2007 XML format is provided. For documentation about the original VOC 2007 development kit and format see here. If you are interested in training Faster RCNN on this dataset, see here for my updates to the PyCaffe implementation of Faster RCNN.

python3 voc_convert.py [VID_DIR] [DSET_DEST] [NUM_THREADS] [NUM_TRAIN] [NUM_VAL] [MAX_RATIO] [INCL_ABS]
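To make the conversion concrete, here is a minimal sketch of turning one box annotation into VOC 2007 XML, assuming normalized [0, 1] coordinates as in the YT-BB CSVs. The function and field names are illustrative, not voc_convert.py's actual API.

```python
import xml.etree.ElementTree as ET


def box_to_voc_xml(filename, width, height, class_name,
                   xmin_frac, xmax_frac, ymin_frac, ymax_frac):
    """Return a VOC 2007 annotation (as an XML string) for a single object."""
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = class_name
    ET.SubElement(obj, "difficult").text = "0"
    bbox = ET.SubElement(obj, "bndbox")
    # VOC uses absolute pixel coordinates, 1-indexed at the top-left
    ET.SubElement(bbox, "xmin").text = str(int(xmin_frac * width) + 1)
    ET.SubElement(bbox, "ymin").text = str(int(ymin_frac * height) + 1)
    ET.SubElement(bbox, "xmax").text = str(int(xmax_frac * width))
    ET.SubElement(bbox, "ymax").text = str(int(ymax_frac * height))
    return ET.tostring(ann, encoding="unicode")
```

The key detail is the coordinate conversion: YT-BB stores box edges as fractions of frame size, while VOC expects absolute, 1-indexed pixel coordinates.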

Classification Decoder

A decoding script has also been provided for the classification task. Usage is similar to the object detection decoder. Decoded frames are sorted into directories according to class.

python3 class_decode.py [VID_DIR] [FRAME_DEST] [NUM_THREADS] [NUM_TRAIN] [NUM_VAL] [MAX_RATIO] [INCL_ABS]
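The per-class directory layout might look like the sketch below. This is an assumption for illustration; class_decode.py's actual naming scheme may differ.

```python
import os


def frame_dest(frame_root, split, class_name, yt_id, frame_idx):
    """Build a destination path like FRAME_DEST/train/dog/VIDID_000123.jpg,
    so that frames are grouped by train/val split and then by class."""
    return os.path.join(frame_root, split, class_name,
                        f"{yt_id}_{frame_idx:06d}.jpg")
```

Grouping frames into one directory per class matches the layout expected by common classification loaders (e.g. ImageFolder-style readers).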