PyTorch implementation of "Learning to Track at 100 FPS with Deep Regression Networks"
This repository contains a reimplementation of GOTURN in PyTorch. If you are interested in any of the following, you should consider using this repository:

- Understanding the different moving parts of the GOTURN algorithm independently, through code.
- Plugging and playing with different parts of the pipeline, such as the data, network, optimizers, and cost functions (a minimal sketch of the network is shown below).
Built with the following frameworks:
As with the author's original implementation, you will be able to train the model in a day.
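If you just want a mental model of the network before reading the code, the sketch below is a rough, simplified rendering of the two-branch architecture described in the paper; it is not the repository's actual model definition, and torchvision's AlexNet stands in for CaffeNet here.

```python
# A minimal sketch (not the repository's exact model) of the GOTURN idea:
# the previous-frame target crop and the current-frame search crop go through
# the same convolutional backbone, their features are concatenated, and fully
# connected layers regress the 4 box coordinates.
import torch
import torch.nn as nn
from torchvision import models


class GoturnSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared conv backbone; in practice it is initialized from
        # ImageNet-pretrained weights.
        self.features = models.alexnet().features
        self.regressor = nn.Sequential(
            nn.Linear(256 * 6 * 6 * 2, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, 4),  # (x1, y1, x2, y2) inside the search crop
        )

    def forward(self, prev_crop, curr_crop):
        # Both crops are Bx3x227x227 tensors, as in the original GOTURN setup.
        f_prev = self.features(prev_crop).flatten(1)
        f_curr = self.features(curr_crop).flatten(1)
        return self.regressor(torch.cat([f_prev, f_curr], dim=1))
```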
Sample tracking results on the Face, Surfer, Bike, and Bear sequences (demo GIFs).
# Clone the repository
$ git clone https://github.com/nrupatunga/goturn-pytorch
# install all the required dependencies
$ cd goturn-pytorch/src
$ pip install -r requirements.txt
# Add the current directory to the environment
$ source settings.sh
$ cd goturn-pytorch/src/scripts
$ ./download_data.sh /path/to/data/directory
If you want to train on your own custom dataset, you might want to understand how the current dataloader works. For that you might need a smaller dataset to debug with, which you can find here. It contains a few samples from the ImageNet and ALOV datasets.
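To get a feel for what the loaders are expected to produce, here is a small, hypothetical Dataset (not the repository's actual loader) that yields the kind of samples GOTURN trains on: a previous-frame target crop, a current-frame search crop, and the ground-truth box of the target inside that search region.

```python
# A toy, hypothetical Dataset that mimics the shape of GOTURN training
# samples; useful for smoke-testing your own data pipeline before wiring it
# into the real loaders.
import torch
from torch.utils.data import Dataset, DataLoader


class ToyGoturnDataset(Dataset):
    def __init__(self, samples):
        # samples: list of (prev_crop, curr_crop, bbox) tuples, where each crop
        # is a 3x227x227 float tensor and bbox is (x1, y1, x2, y2).
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        prev_crop, curr_crop, bbox = self.samples[idx]
        return prev_crop, curr_crop, torch.as_tensor(bbox, dtype=torch.float32)


if __name__ == '__main__':
    # Random data stands in for the ImageNet/ALOV debug samples.
    samples = [(torch.rand(3, 227, 227), torch.rand(3, 227, 227), (10, 20, 100, 120))
               for _ in range(4)]
    for prev_crop, curr_crop, bbox in DataLoader(ToyGoturnDataset(samples), batch_size=2):
        print(prev_crop.shape, curr_crop.shape, bbox.shape)
```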
Once you understand the dataloaders, please refer here for more information on where to modify the training script.
$ cd goturn-pytorch/src/scripts
# Modify the following variables in the script
# Path to the ImageNet dataset
IMAGENET_PATH='/media/nthere/datasets/ISLVRC2014_Det/'
# Path to the ALOV dataset
ALOV_PATH='/media/nthere/datasets/ALOV/'
# save path for models
SAVE_PATH='./caffenet/'
# open another terminal and run
$ visdom
# You can visualize the train/val images and loss curves:
# simply open http://localhost:8097 in your browser
# training
$ bash train.sh
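Under the hood, the curves you see in the browser are just scalars pushed to the Visdom server; the snippet below is only an illustration of that mechanism, not the repository's exact logging code.

```python
# Illustrative Visdom logging: append one scalar per step to a line plot on
# the default server at http://localhost:8097.
import numpy as np
import visdom

vis = visdom.Visdom()  # assumes `visdom` is already running in another terminal
for step in range(100):
    loss = float(np.exp(-step / 30.0))  # stand-in for the real training loss
    vis.line(X=np.array([step]), Y=np.array([loss]), win='train_loss',
             update='append' if step > 0 else None,
             opts={'title': 'train loss'})
```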
Training loss curve.
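The paper trains the network with an L1 loss between the predicted and ground-truth box coordinates, which is essentially what the curve above tracks; in PyTorch that boils down to:

```python
import torch
import torch.nn.functional as F

pred_bbox = torch.tensor([[0.25, 0.25, 0.75, 0.75]])  # network output (x1, y1, x2, y2)
gt_bbox = torch.tensor([[0.20, 0.30, 0.70, 0.80]])    # ground-truth box
loss = F.l1_loss(pred_bbox, gt_bbox)                  # mean absolute error over coordinates
print(loss.item())
```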
To test the model, you can use the pretrained model from this link, or you can use your own trained model.
$ mkdir goturn-pytorch/models
# Copy the extracted caffenet folder into the models folder, if you are
# using the pretrained model
$ cd goturn-pytorch/src/scripts
$ bash demo_folder.sh
# Once it is running, select the bounding box on the frame using the mouse;
# after you select the bounding box, press 's' to start tracking.
# If the model loses track, press 'p' to pause, mark the
# bounding box again as before, and press 's' to resume
# tracking
# To test on a new video, you need to extract the frames from the video
# using ffmpeg or any other tool and modify the folder path in
# demo_folder.sh accordingly
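If you would rather stay in Python than use ffmpeg, something along these lines (using OpenCV) dumps a video into numbered frames; the paths and filename pattern are placeholders, so adjust them to whatever demo_folder.sh expects.

```python
# Extract a video into numbered JPEG frames with OpenCV. The paths and the
# filename pattern below are placeholders; match them to what demo_folder.sh
# points at.
import os
import cv2

video_path = 'my_video.mp4'     # your input video
out_dir = '/path/to/frames'     # folder you will reference in demo_folder.sh
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, f'{idx:06d}.jpg'), frame)
    idx += 1
cap.release()
print(f'Wrote {idx} frames to {out_dir}')
```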