```bash
git clone git@github.com:alantrrs/OpenTLD.git
cd OpenTLD
mkdir build
cd build
cmake ../src/
make
cd ../bin/

# To run from camera
./run_tld -p ../parameters.yml -tl

# To run from file
./run_tld -p ../parameters.yml -s ../datasets/06_car/car.mpg -tl

# To init bounding box from file
./run_tld -p ../parameters.yml -s ../datasets/06_car/car.mpg -b ../datasets/06_car/init.txt -tl

# To train only on the first frame (no tracking, no learning)
./run_tld -p ../parameters.yml -s ../datasets/06_car/car.mpg -b ../datasets/06_car/init.txt

# To test the final detector (repeat the video: the first pass learns, the second pass detects)
./run_tld -p ../parameters.yml -s ../datasets/06_car/car.mpg -b ../datasets/06_car/init.txt -tl -r
```
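The `-b` option points at a plain-text file holding the initial bounding box. As a minimal sketch only (the exact format is an assumption here; check the dataset's own `init.txt` for the real convention), such a file could be generated like this:

```python
# Sketch: write an initial bounding box file for the -b option.
# Assumption: a single line of comma-separated corner coordinates (x1,y1,x2,y2).
# The coordinates below are hypothetical, not taken from the 06_car dataset.
x1, y1, x2, y2 = 140, 120, 230, 165

with open("init.txt", "w") as f:
    f.write(f"{x1},{y1},{x2},{y2}\n")
```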
The program writes its output to a file called `bounding_boxes.txt`, which contains the detections made over the whole video. To evaluate the performance of the algorithm, this file is compared against the ground-truth file using a Python script:

```bash
python ../datasets/evaluate_vis.py ../datasets/06_car/car.mpg bounding_boxes.txt ../datasets/06_car/gt.txt
```
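For reference, a comparison of this kind typically computes the overlap (intersection over union) between each detected box and the corresponding ground-truth box. The sketch below is not `evaluate_vis.py` itself, just an assumed minimal version of that idea; the one-box-per-line comma-separated format, the frame-by-frame alignment of the two files, and the 0.5 overlap threshold are all assumptions.

```python
# Sketch of an overlap-based comparison between detections and ground truth.
# Assumptions: both files hold one comma-separated box per line (x1,y1,x2,y2),
# lines correspond frame by frame, and a detection counts as correct when its
# intersection-over-union with the ground truth exceeds 0.5.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def load_boxes(path):
    """Read one comma-separated box per non-empty line."""
    with open(path) as f:
        return [tuple(float(v) for v in line.split(",")) for line in f if line.strip()]

detections = load_boxes("bounding_boxes.txt")
ground_truth = load_boxes("gt.txt")

# zip truncates to the shorter file if the two disagree in length.
correct = sum(iou(d, g) > 0.5 for d, g in zip(detections, ground_truth))
print(f"correct detections: {correct}/{len(ground_truth)}")
```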
Thanks to Zdenek Kalal for releasing his awesome algorithm.