[Sep 2024] SAM 2 integration for video tracking
[Sep 2024] DAMM accepted into Scientific Reports!
# create conda environment
conda create -n DAMM python=3.10
conda activate DAMM
# get codebase
git clone https://github.com/backprop64/DAMM
cd DAMM
# setup SAM 2
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
git clone https://github.com/facebookresearch/segment-anything-2.git
cd segment-anything-2
pip install .
# setup detectron2 and opencv (torch was already installed above)
conda install conda-forge::detectron2
conda install conda-forge::opencv
# make everything importable
cd -
python setup.py install
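After installation, a quick way to confirm the environment is wired up is to check that the core packages import. This is a generic helper for convenience, not part of DAMM:

```python
import importlib

def check_imports(modules):
    """Return a dict mapping each module name -> True if it can be imported."""
    results = {}
    for name in modules:
        try:
            importlib.import_module(name)
            results[name] = True
        except ImportError:
            results[name] = False
    return results

# report which of the dependencies installed above are importable
for name, ok in check_imports(["torch", "cv2", "detectron2", "sam2"]).items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```

If any line reports `MISSING`, rerun the corresponding install step above inside the activated `DAMM` environment.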
# detect any mouse model/config
wget https://www.dropbox.com/s/39a690qldduxawz/DAMM_weights.pth
wget https://www.dropbox.com/s/wegw8l5zq3vqln0/DAMM_config.yaml
# sam model weights (models below are ordered from smallest to largest, and you only need 1)
wget https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt # (associated config: sam2_hiera_t.yaml)
wget https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt # (associated config: sam2_hiera_s.yaml)
wget https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt # (associated config: sam2_hiera_b+.yaml)
wget https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt # (associated config: sam2_hiera_l.yaml)
*Tips: to use less compute and get your analysis faster, it is a good idea to (1) try the smaller models first, and (2) find the lowest FPS that still results in successful tracking. The tradeoff is that larger models provide better results.
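On the FPS point above: a small helper (hypothetical, not part of DAMM) to work out which frame indices survive when downsampling a clip to a lower effective FPS, so you can estimate how much compute a lower rate saves:

```python
def subsample_indices(n_frames, src_fps, dst_fps):
    """Indices of the frames kept when downsampling from src_fps to dst_fps."""
    if dst_fps >= src_fps:
        return list(range(n_frames))
    step = src_fps / dst_fps
    return [int(i * step) for i in range(int(n_frames / step))]

# downsampling 30 FPS to 5 FPS keeps every 6th frame
print(subsample_indices(12, 30, 5))  # -> [0, 6]
```

For example, tracking at 5 FPS instead of 30 FPS processes one sixth of the frames; if tracking still succeeds at that rate, the analysis finishes roughly six times faster.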
from DAMM.tracking import PromptableVideoTracker
# initialize the tracker
mouse_tracker = PromptableVideoTracker(
    # sam_config: no need to download this file or give a full path;
    # use the config name associated with your checkpoint (listed above)
    sam_config="sam2_hiera_l.yaml",
    sam_checkpoint="path/to/sam2_hiera_large.pt",
    damm_config="path/to/DAMM_config.yaml",
    damm_checkpoint="path/to/DAMM_weights.pth",
)
# Track the first 250 frames of demo_video.mp4
# Save the output and visualization to the output_dir
mouse_tracker.predict_video(
    video_path='demo_video.mp4',
    output_dir='demo_output/',
    batch_size=64,
    start_frame=0,
    end_frame=250,
    visualize=True,
)
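To run the same tracker over a whole folder of videos, one pattern is to pair each video with its own output directory first. This is a sketch; `plan_batch` is a hypothetical helper, not part of the DAMM API:

```python
from pathlib import Path

def plan_batch(video_dir, output_root):
    """Pair every .mp4 under video_dir with its own output subdirectory."""
    return [
        (str(v), str(Path(output_root) / v.stem))
        for v in sorted(Path(video_dir).glob("*.mp4"))
    ]

# each pair feeds straight into predict_video, e.g.:
# for video_path, output_dir in plan_batch("videos/", "demo_output/"):
#     mouse_tracker.predict_video(video_path=video_path, output_dir=output_dir,
#                                 batch_size=64, visualize=False)
```

Giving each video its own output subdirectory keeps the per-video results and visualizations from overwriting each other.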
conda activate DAMM
cd path/to/DAMM
# flags: --video_input (input video, required), --output_dir (directory for results),
# --start_frame / --end_frame (frame range to process), --visualize (true/false)
python track_mice.py \
    --sam_config "path/to/sam2_hiera_l.yaml" \
    --sam_checkpoint "path/to/sam2_hiera_large.pt" \
    --damm_config "path/to/DAMM_config.yaml" \
    --damm_checkpoint "path/to/DAMM_weights.pth" \
    --video_input "path/to/input/video.mp4" \
    --output_dir "path/to/output/directory/" \
    --start_frame 0 \
    --end_frame 1000 \
    --visualize true
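The flag list above is a plain argparse-style interface; below is a minimal sketch of how such a script could parse them. This is our approximation, not the actual `track_mice.py` source:

```python
import argparse

def build_parser():
    """Argument parser mirroring the flags listed above (sketch only)."""
    p = argparse.ArgumentParser(description="Track mice in a video with DAMM + SAM 2")
    p.add_argument("--sam_config", required=True)
    p.add_argument("--sam_checkpoint", required=True)
    p.add_argument("--damm_config", required=True)
    p.add_argument("--damm_checkpoint", required=True)
    p.add_argument("--video_input", required=True)
    p.add_argument("--output_dir", default="output/")
    p.add_argument("--start_frame", type=int, default=0)
    p.add_argument("--end_frame", type=int, default=None)
    # accept "true"/"false" strings from the command line
    p.add_argument("--visualize", type=lambda s: s.lower() == "true", default=True)
    return p

args = build_parser().parse_args([
    "--sam_config", "sam2_hiera_l.yaml",
    "--sam_checkpoint", "sam2_hiera_large.pt",
    "--damm_config", "DAMM_config.yaml",
    "--damm_checkpoint", "DAMM_weights.pth",
    "--video_input", "video.mp4",
    "--end_frame", "1000",
])
```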
Use this notebook to track mice in videos. You can either use our default DAMM weights (automatically downloaded in the notebook) or your own weights (created with the fine-tuning notebook; see below).
Use this notebook to create a dataset, annotate bounding boxes, and fine-tune an object detection model. The fine-tuned model can then be used for tracking, either in this notebook or in the Tracking Notebook.
If our DAMM tool was useful in your work, please cite us!
@article{kaul2024damm,
  author  = {Gaurav Kaul and Jonathan McDevitt and Justin Johnson and Ada Eban-Rothschild},
  title   = {DAMM for the detection and tracking of multiple animals within complex social and environmental settings},
  journal = {Scientific Reports},
  volume  = {14},
  pages   = {21366},
  year    = {2024},
  doi     = {10.1038/s41598-024-72367-2},
  url     = {https://doi.org/10.1038/s41598-024-72367-2},
}