Daniel Joska, Liam Clark, Naoya Muramatsu, Ricardo Jericevich, Fred Nicolls, Alexander Mathis, Mackenzie W. Mathis, Amir Patel
AcinoSet is a dataset of free-running cheetahs in the wild that contains 119,490 frames of multi-view synchronized high-speed video footage, camera calibration files and 7,588 human-annotated frames. We utilize markerless animal pose estimation with DeepLabCut to provide 2D keypoints (in the 119K frames). Then, we use three methods that serve as strong baselines for 3D pose estimation tool development: traditional sparse bundle adjustment, an Extended Kalman Filter, and a trajectory optimization-based method we call Full Trajectory Estimation. The resulting 3D trajectories, human-checked 3D ground truth, and an interactive tool to inspect the data are also provided. We believe this dataset will be useful for a diverse range of fields such as ecology, robotics, biomechanics, and computer vision.
The resulting 3D trajectory files are called `fte.pickle`, have a related `(n)_cam_scene_sba.json` file, and can be loaded in the GUI. The following sections document how this data was created with the code in this repo.
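Outside the GUI, a result file can also be inspected directly with Python's `pickle` module. This is a minimal sketch; the file path is illustrative and the layout of the contents is an assumption, so list the keys to see what is actually stored:

```python
import pickle

# Illustrative path; point this at any downloaded result file.
with open("data/2019_03_09/lily/run/fte/fte.pickle", "rb") as f:
    results = pickle.load(f)

# The exact contents are an assumption here: list the keys (if it is
# a dict) to find the 3D trajectories and any per-frame metadata.
print(type(results))
if isinstance(results, dict):
    print(list(results.keys()))
```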
You can use the `full_cheetah` model provided in the DLC Model Zoo to re-create the existing H5 files (or to analyze new videos). If you want to label more cheetah data, you can also do so within the DeepLabCut framework. We provide a conda file for an easy install, but please see the repo for installation and instructions for use.
$ conda env create -f conda_envs/DLC.yml -n DLC
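Once DeepLabCut is installed, the Model Zoo can be driven from Python. This is a rough sketch, not part of this repo's pipeline; the project name, experimenter, and video path are placeholders, and Model Zoo availability can vary across DeepLabCut versions:

```python
import deeplabcut

# Create a project pre-loaded with the full_cheetah Model Zoo weights
# and run inference right away, producing H5 keypoint files.
config_path, train_config_path = deeplabcut.create_pretrained_project(
    "cheetah-demo",                    # placeholder project name
    "me",                              # placeholder experimenter
    ["/path/to/cheetah_video.mp4"],    # placeholder video path
    model="full_cheetah",
    analyzevideo=True,
    createlabeledvideo=True,
)
```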
Navigate to the AcinoSet folder and build the environment:
$ conda env create -f conda_envs/acinoset.yml
Launch Jupyter Lab:
$ jupyter lab
Open `calib_with_gui.ipynb` and follow the instructions.
Alternatively, if the checkerboard points detected in `calib_with_gui.ipynb` are unsatisfactory, open `saveMatlabPointsForAcinoSet.m` in MATLAB and follow the instructions. Note that this requires MATLAB 2020b or later.
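If you want to sanity-check checkerboard detection on a frame yourself before re-running the calibration, OpenCV's standard corner detector is a quick way to do it. This snippet is illustrative and independent of the notebooks above; the board size and image path are assumptions:

```python
import cv2

# Illustrative values: adjust to your checkerboard and frame.
BOARD_SIZE = (9, 6)  # inner corners per (row, column)
img = cv2.imread("frame_0001.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
if found:
    # Refine to sub-pixel accuracy before using the points for calibration.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
print("checkerboard found:", found)
```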
You can manually define points on each video in a scene with Argus Clicker. A quick tutorial is found here.
Build the environment:
$ conda env create -f conda_envs/argus.yml
Launch Argus Clicker:
$ python
>>> import argus_gui as ag; ag.ClickerGUI()
Keyboard Shortcuts (See documentation here for more):
- `G` ... to a specific frame
- `X` ... to switch the sync mode setting the windows to the same frame
- `O` ... to bring up the options dialog
- `S` ... to bring up a save dialog

Then you must convert the output data from Argus to work with the rest of the pipeline (here is an example):
$ python argus_converter.py \
--data_dir ../data/2019_03_07/extrinsic_calib/argus_folder
To reconstruct a cheetah in 3D, we offer three different pose estimation options on top of standard triangulation (TRI):

- Sparse Bundle Adjustment (SBA)
- Extended Kalman Filter (EKF)
- Full Trajectory Estimation (FTE)
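For reference, plain two-view triangulation can be sketched with OpenCV. This snippet is illustrative and independent of the repo's own TRI implementation; the projection matrices and 2D points below are placeholders, not real calibration data:

```python
import cv2
import numpy as np

# Placeholder 3x4 projection matrices for two calibrated cameras
# (in practice these come from the scene calibration files).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Matching 2D keypoints in normalized image coordinates, shape (2, N).
pts1 = np.array([[0.10], [0.00]])
pts2 = np.array([[-0.15], [0.00]])

# Triangulate to homogeneous world coordinates, then dehomogenize.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T  # (N, 3); here approximately (0.2, 0.0, 2.0)
print(X)
```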
You can run each option separately. For example, simply open `FTE.ipynb` and follow the instructions!
Otherwise, you can run all types of refinements in one go:
$ python all_optimizations.py --data_dir 2019_03_09/lily/run --start_frame 70 --end_frame 170 --dlc_thresh 0.5
NB: When running the FTE, we recommend that you use the MA86 solver. For details on how to set this up, see these instructions.
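As a sketch of where that setting lives, assuming the FTE is solved with IPOPT through Pyomo and that an HSL-enabled IPOPT build is available, the linear solver is selected via an IPOPT option:

```python
from pyomo.environ import SolverFactory

# Select IPOPT and request HSL's MA86 linear solver. This assumes an
# IPOPT build that is linked against the HSL library.
solver = SolverFactory("ipopt")
solver.options["linear_solver"] = "ma86"

# solver.solve(model, tee=True)  # `model` is your Pyomo optimization model
```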
If you use our code or data, we kindly ask that you cite our paper (note that it has been accepted to ICRA 2021, so please check back for an updated reference):
@misc{joska2021acinoset,
title={AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs in the Wild},
author={Daniel Joska and Liam Clark and Naoya Muramatsu and Ricardo Jericevich and Fred Nicolls and Alexander Mathis and Mackenzie W. Mathis and Amir Patel},
year={2021},
eprint={2103.13282},
archivePrefix={arXiv},
primaryClass={cs.CV}
}