This repo contains helpful scripts for using the YouTube BoundingBoxes dataset released by Google Research. The dataset is currently distributed only as CSV annotation files: the CSVs contain links to the videos on YouTube, but it is up to you to download the video files themselves. These scripts are therefore provided for downloading, cutting, and decoding the videos into a usable form.
These scripts were written by Mark Buckler; the YouTube BoundingBoxes dataset was created and curated by Esteban Real, Jonathon Shlens, Stefano Mazzocchi, Xin Pan, and Vincent Vanhoucke. The dataset web page is [here](https://research.google.com/youtube-bb/) and the accompanying whitepaper is [here](https://arxiv.org/abs/1702.00824).
1. Clone this repository.
2. Install the majority of the Python dependencies by running `pip install -r requirements.txt` in this repo's directory.
3. Install wget, ffmpeg, and youtube-dl through your package manager. For most platforms this should be straightforward, but Ubuntu 14.04 users will need to update their apt-get repositories before they can install ffmpeg, as shown here.
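If you want to confirm the external tools are visible before running anything, a quick check like the following works. This is a minimal sketch for convenience; the repo itself does not ship this helper.

```python
import shutil
import sys

# External tools the scripts shell out to; all must be on PATH.
REQUIRED_TOOLS = ["wget", "ffmpeg", "youtube-dl"]

missing = [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]
if missing:
    sys.exit("Missing required tools: " + ", ".join(missing))
print("All external dependencies found.")
```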
Some small tweaks may be needed for other software environments; these scripts were developed and tested on Ubuntu 14.04.

Note: You will need at least Python 3.0. These scripts were developed with Python 3.5.2.
The `download.py` script downloads the annotated videos and cuts each one down to the range in which it was annotated. Parallel video downloads are supported so that you can saturate your download bandwidth even though YouTube throttles per video. Because video clips are cut with FFmpeg re-encoding (see here for why), the bottleneck is compute speed rather than download speed. For this reason, set the number of threads to the number of cores on your machine for best results; the sketch below shows the general shape of that pipeline.
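To make the download-then-cut structure concrete, here is a minimal sketch of one worker in that pipeline. It is an illustration only, not the actual `download.py` code; the job tuple format, the temp-file naming, and the example URL are all placeholders.

```python
import subprocess
from multiprocessing import Pool

def download_and_cut(job):
    """Download one YouTube video and re-encode just the annotated clip."""
    url, start_sec, end_sec, out_path = job
    tmp = out_path + ".full.mp4"  # per-job temp file so parallel workers don't collide
    # youtube-dl fetches the full video to the temp path.
    subprocess.check_call(["youtube-dl", "-o", tmp, url])
    # ffmpeg re-encodes while cutting, so the clip starts exactly at the annotated range.
    subprocess.check_call([
        "ffmpeg", "-y", "-i", tmp,
        "-ss", str(start_sec), "-to", str(end_sec),
        out_path,
    ])

if __name__ == "__main__":
    jobs = [("https://youtu.be/VIDEO_ID", 10.0, 25.0, "clip_0.mp4")]  # placeholder job list
    with Pool(processes=4) as pool:  # one worker per core is a good default
        pool.map(download_and_cut, jobs)
```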
```
python3 download.py [VID_DIR] [NUM_THREADS]

  [VID_DIR]      Directory to download videos into
  [NUM_THREADS]  Number of threads to use for downloading and cutting
```

For the detection task, a script is provided for decoding frames and converting the CSV annotations into the VOC 2007 XML format. For documentation about the original VOC 2007 development kit and format, see here. If you are interested in training Faster RCNN on this dataset, see here for my updates to the PyCaffe implementation of Faster RCNN.
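As a reference for what the converter produces, the following sketch builds one VOC 2007-style annotation using Python's standard library. It is a hand-written illustration of the general layout, not output copied from `voc_convert.py`; the file name, class label, and box coordinates are placeholders.

```python
import xml.etree.ElementTree as ET

def make_voc_annotation(filename, width, height, label, box):
    """Build a minimal VOC 2007-style annotation for one frame.

    box is (xmin, ymin, xmax, ymax) in pixels.
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"
    obj = ET.SubElement(root, "object")
    ET.SubElement(obj, "name").text = label
    bndbox = ET.SubElement(obj, "bndbox")
    for tag, value in zip(("xmin", "ymin", "xmax", "ymax"), box):
        ET.SubElement(bndbox, tag).text = str(value)
    return ET.ElementTree(root)

# Placeholder frame: a 1280x720 image with one "dog" bounding box.
make_voc_annotation("frame_000001.jpg", 1280, 720, "dog",
                    (100, 50, 400, 300)).write("frame_000001.xml")
```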
```
python3 voc_convert.py [VID_DIR] [DSET_DEST] [NUM_THREADS] [NUM_TRAIN] [NUM_VAL] [MAX_RATIO] [INCL_ABS]

  [VID_DIR]      The source directory containing the downloaded videos
  [DSET_DEST]    The destination directory for the converted dataset
  [NUM_THREADS]  The number of threads to use for frame decoding
  [NUM_TRAIN]    The number of training images to decode. Use 0 to decode all annotated frames
  [NUM_VAL]      The number of validation images to decode. Use 0 to decode all annotated frames
  [MAX_RATIO]    The maximum aspect ratio allowed. If set to 0, all frames will be decoded; otherwise
                 frames with aspect ratios greater than the maximum will be deleted and excluded from
                 the XML annotations
  [INCL_ABS]     Flag to include (1) or exclude (0) frames in which the object of interest is absent
```

A decoding script has also been provided for the classification task. Usage is similar to the object detection decoder, except that decoded frames are sorted into directories according to class.
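To illustrate the per-class layout, here is a minimal sketch of how one frame could be extracted and filed under its class directory. It is an illustration under stated assumptions, not `class_decode.py` itself; the paths, the timestamp, and the file-naming scheme are placeholders.

```python
import os
import subprocess

def decode_frame(video_path, time_sec, class_name, frame_dest):
    """Extract one annotated frame and file it under its class directory."""
    class_dir = os.path.join(frame_dest, class_name)
    os.makedirs(class_dir, exist_ok=True)
    # Placeholder naming scheme: <video basename>_<timestamp in ms>.jpg
    stem = os.path.splitext(os.path.basename(video_path))[0]
    out_path = os.path.join(class_dir, "%s_%d.jpg" % (stem, int(time_sec * 1000)))
    # -ss before -i seeks quickly; -frames:v 1 writes a single frame.
    subprocess.check_call([
        "ffmpeg", "-ss", str(time_sec), "-i", video_path,
        "-frames:v", "1", "-y", out_path,
    ])

# Placeholder call: file one frame from a dog clip under frames/dog/.
decode_frame("clips/dog_clip_0.mp4", 2.5, "dog", "frames")
```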
```
python3 class_decode.py [VID_DIR] [FRAME_DEST] [NUM_THREADS] [NUM_TRAIN] [NUM_VAL] [MAX_RATIO] [INCL_ABS]

  [VID_DIR]      The source directory containing the downloaded videos
  [FRAME_DEST]   The top-level directory in which class folders containing frames will be created
  [NUM_THREADS]  The number of threads to use for frame decoding
  [NUM_TRAIN]    The number of training images to decode. Use 0 to decode all annotated frames
  [NUM_VAL]      The number of validation images to decode. Use 0 to decode all annotated frames
  [MAX_RATIO]    The maximum aspect ratio allowed. If set to 0, all frames will be decoded; otherwise
                 frames with aspect ratios greater than the maximum will be skipped
  [INCL_ABS]     Flag to include (1) or exclude (0) frames in which the object of interest is absent
```