
Anti-Drift Pose Tracker (ADPT): A transformer-based network for robust animal pose estimation cross-species


Key Findings

Our analyses demonstrate that ADPT significantly reduces body-point drift in animal pose estimation and outperforms existing deep-learning methods such as DeepLabCut, SLEAP, and DeepPoseKit. ADPT's anti-drift tracking is also unbiased across individuals and video background conditions, ensuring consistency for downstream behavioral analyses. In evaluations on public datasets, ADPT achieved higher body-point tracking accuracy, required less training data, and delivered faster inference.

Furthermore, we applied ADPT to end-to-end multi-animal identity-pose synchronized tracking, achieving over 90% accuracy in identity recognition. This end-to-end approach reduces computational cost compared with methods such as ma-DLC, SIPEC, and Social Behavior Atlas, potentially enabling real-time multi-animal behavior analysis.

Usage

This tool provides Python scripts for training models and predicting on behavior videos. Users simply activate the corresponding environment and run the provided scripts to start training and prediction. An NVIDIA GPU (RTX 2080 Ti or better) with up-to-date drivers is recommended.

Configure the ADPT virtual environment

conda create -n ADPT python==3.9
conda activate ADPT
pip install tensorflow==2.9.1
pip install tensorflow-addons==0.17.1
conda install cudnn==8.2.1
pip install imgaug
pip install pandas
pip install pyyaml
pip install tqdm
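For reproducibility, the same environment can also be captured in a single file. The `environment.yml` below is an assumed equivalent that simply mirrors the commands above; ADPT does not ship this file, so treat it as a convenience sketch:

```yaml
# Hypothetical environment.yml mirroring the manual setup commands above.
# Create with: conda env create -f environment.yml
name: ADPT
channels:
  - defaults
dependencies:
  - python=3.9
  - cudnn=8.2.1
  - pip
  - pip:
      - tensorflow==2.9.1
      - tensorflow-addons==0.17.1
      - imgaug
      - pandas
      - pyyaml
      - tqdm
```

Pinning TensorFlow 2.9.1 against cuDNN 8.2.1 matters here: newer TensorFlow releases expect different CUDA/cuDNN versions and may fail to detect the GPU.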

Train a model with ADPT

Use ADPT to predict videos

Data Preparation

We do not provide a dedicated annotation tool. Users can annotate data with DeepLabCut and then convert the annotations into the JSON dataset ADPT requires using the provided conversion code.
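To illustrate the general shape of such a conversion, the sketch below parses a DeepLabCut annotation CSV (which carries three header rows: scorer, bodyparts, coords) into a list of per-image keypoint records. The output schema here (`{"image": ..., "keypoints": {...}}`) is an assumption for illustration only; use the conversion script shipped with ADPT for the exact JSON format the trainer expects.

```python
import csv
import json

def dlc_csv_to_json(csv_path, json_path=None):
    """Convert a DeepLabCut annotation CSV into a simple JSON-serializable list.

    NOTE: the record schema produced here is hypothetical; ADPT's own
    conversion code defines the exact JSON format it requires.
    """
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))

    # DeepLabCut CSVs have three header rows: scorer, bodyparts, coords.
    bodyparts = rows[1][1:]
    coords = rows[2][1:]

    records = []
    for row in rows[3:]:
        image, values = row[0], row[1:]
        keypoints = {}
        for part, coord, value in zip(bodyparts, coords, values):
            # Unlabeled points appear as empty cells; keep them as None.
            keypoints.setdefault(part, {})[coord] = float(value) if value else None
        records.append({"image": image, "keypoints": keypoints})

    if json_path is not None:
        with open(json_path, "w") as f:
            json.dump(records, f, indent=2)
    return records
```

Passing `json_path` writes the converted dataset to disk; otherwise the records are returned for further processing.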

Provided Training and Test Data

We offer a shared training dataset and videos for testing purposes. These videos have been instrumental in our analysis of the model's anti-drift capabilities, as discussed in our research paper.

Citation

If you use this project in your research, please cite our paper:

Anti-drift pose tracker (ADPT): A transformer-based network for robust animal pose estimation cross-species
Authors: Guoling Tang, Yaning Han, Quanying Liu, Pengfei Wei
Published: May 2024
DOI: 10.7554/eLife.95709.1

License

This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.