This is the official repository for the ECCV'24 paper: Domain-Adaptive 2D Human Pose Estimation via Dual Teachers in Extremely Low-Light Conditions.
The code is developed with Python 3.11.8 on Ubuntu 20.04, and our model is trained on four NVIDIA RTX 3090 GPUs. Other platforms have not been fully tested.
```shell
pip install -r requirements.txt
pip install -e .
mkdir output
mkdir log
```
Download the pretrained models (the ImageNet-pretrained HRNet-W32 weights and our final ExLPose model) and place them under ${POSE_ROOT} as follows:

```
${POSE_ROOT}
`-- model
    |-- imagenet
    |   `-- hrnet_w32-36af842e.pth
    `-- exlpose
        `-- model_final.pth
```
Please organize your project directory tree as follows:

```
${POSE_ROOT}
├── model
├── experiments
├── lib
├── tools
├── output
├── README.md
└── requirements.txt
```
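As a quick sanity check before training, a small script along these lines (an illustrative sketch, not part of the repository) can confirm the layout is in place:

```python
import os

# Top-level entries expected under ${POSE_ROOT}, per the tree above.
EXPECTED = ["model", "experiments", "lib", "tools", "output",
            "README.md", "requirements.txt"]

def check_layout(pose_root):
    """Return the expected entries that are missing under pose_root."""
    return [name for name in EXPECTED
            if not os.path.exists(os.path.join(pose_root, name))]

# Example: report anything missing from the current directory.
# missing = check_layout(".")
# print(missing or "layout OK")
```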
For the ExLPose data, please download it from the ExLPose download page. Extract the archives under ${DATASET_ROOT} and organize them as follows:
```
${DATASET_ROOT}
|-- Annotations
|   |-- ExLPose_test_LL-A.json
|   |-- ExLPose_test_LL-E.json
|   |-- ...
|   `-- ExLPose_train_WL.json
|-- ExLPose
|   |-- bright
|   |   |-- imgs_0119_3_vid000002_exp100_bright_000052__gain_0.00_exposure_1000.png
|   |   |-- ...
|   |   `-- imgs_0212_hwangridan_vid000021_exp1200_bright_000092__gain_28.18_exposure_417.png
|   `-- dark
|       |-- imgs_0119_3_vid000002_exp100_dark_000052__gain_0.00_exposure_1000.png
|       |-- ...
|       `-- imgs_0212_hwangridan_vid000021_exp1200_dark_000092__gain_6.60_exposure_417.png
`-- ExLPose-OCN
    |-- A7M3
    |   |-- 0822_DSC07102.JPG
    |   |-- ...
    |   `-- 0829_DSC08058.JPG
    `-- RICOH3
        |-- 0825_R0000208.JPG
        |-- ...
        `-- 0829_R0000662.JPG
```
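The ExLPose image names encode capture metadata (camera gain and exposure). If you need those fields, a small helper along these lines (illustrative, not part of the codebase) pulls them out:

```python
import re

# Matches the "..._gain_<g>_exposure_<e>.png" suffix seen in the tree above.
_META = re.compile(r"gain_(?P<gain>[\d.]+)_exposure_(?P<exposure>\d+)\.png$")

def capture_meta(filename):
    """Return (gain, exposure) parsed from an ExLPose image filename."""
    m = _META.search(filename)
    if m is None:
        raise ValueError(f"unrecognized ExLPose filename: {filename}")
    return float(m.group("gain")), int(m.group("exposure"))
```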
Note that the default testing configuration uses 4 GPUs. Please adjust this according to your machine’s specifications.
```shell
python tools/valid_test.py --cfg experiments/exlpose/test_config.yaml \
    TEST.MODEL_FILE model/exlpose/model_final.pth DATASET.ROOT ${DATASET_ROOT} \
    TEST.NMS_THRE 0.15 TEST.SCALE_FACTOR 0.5,1,2 TEST.MATCH_HMP True DATASET.TEST all
```
Set DATASET.TEST to 'normal', 'hard', or 'extreme' to evaluate on the LL-N, LL-H, and LL-E splits, respectively.
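To evaluate every split in one go rather than editing DATASET.TEST by hand, a small driver like the following (a sketch, assuming the paths above; only DATASET.TEST changes between runs) can build and launch each command:

```python
import subprocess

# Fixed part of the evaluation command, mirroring the example above.
BASE_CMD = [
    "python", "tools/valid_test.py",
    "--cfg", "experiments/exlpose/test_config.yaml",
    "TEST.MODEL_FILE", "model/exlpose/model_final.pth",
    "DATASET.ROOT", "${DATASET_ROOT}",  # replace with your dataset path
    "TEST.NMS_THRE", "0.15",
    "TEST.SCALE_FACTOR", "0.5,1,2",
    "TEST.MATCH_HMP", "True",
]

def split_cmd(split):
    """Full command for one split: 'normal', 'hard', 'extreme', or 'all'."""
    return BASE_CMD + ["DATASET.TEST", split]

# Uncomment to run all splits sequentially:
# for split in ("normal", "hard", "extreme", "all"):
#     subprocess.run(split_cmd(split), check=True)
```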
```shell
python tools/valid_ocn.py --cfg experiments/exlpose/test_config.yaml \
    TEST.MODEL_FILE model/exlpose/model_final.pth DATASET.ROOT ${DATASET_ROOT} \
    TEST.NMS_THRE 0.15 TEST.SCALE_FACTOR 0.5,1,2 TEST.MATCH_HMP True DATASET.TEST RICOH3
```
Set DATASET.TEST to 'A7M3' to evaluate on the A7M3 split.
Pre-Training Stage: training the main teacher on well-lit images.

```shell
python tools/train_stage1_PT.py --cfg experiments/exlpose/PT_stage_config.yaml \
    DATASET.ROOT ${DATASET_ROOT}
```
Pre-Training Stage: training the main teacher on fake low-light images.

```shell
python tools/train_stage1_PT.py --cfg experiments/exlpose/PT_stage_config.yaml \
    DATASET.ROOT ${DATASET_ROOT} TRAIN.STAGE PT_LL MODEL.PRETRAINED_MAIN ${MAIN_WEIGHTS_FILE}
```
Pre-Training Stage: training the complementary (comp.) teacher on well-lit images.

```shell
python tools/train_stage1_PT.py --cfg experiments/exlpose/PT_stage_config.yaml \
    DATASET.ROOT ${DATASET_ROOT} MODEL.NAME hrnet_comp
```
Pre-Training Stage: training the comp. teacher on fake low-light images.

```shell
python tools/train_stage1_PT.py --cfg experiments/exlpose/PT_stage_config.yaml \
    DATASET.ROOT ${DATASET_ROOT} TRAIN.STAGE PT_LL MODEL.NAME hrnet_comp \
    MODEL.PRETRAINED_COMP ${COMP_WEIGHTS_FILE}
```
KA Stage:

```shell
python tools/train_stage2_KA.py --cfg experiments/exlpose/KA_stage_config.yaml \
    DATASET.ROOT ${DATASET_ROOT} MODEL.PRETRAINED_MAIN ${MAIN_WEIGHTS_FILE} \
    MODEL.PRETRAINED_COMP ${COMP_WEIGHTS_FILE}
```
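The five training commands above run in sequence. A minimal driver along these lines (a sketch; the weight-file arguments are placeholders you must point at the checkpoints the preceding stages actually wrote) can chain them:

```python
import subprocess

PT_CFG = "experiments/exlpose/PT_stage_config.yaml"
KA_CFG = "experiments/exlpose/KA_stage_config.yaml"

def stage_cmds(dataset_root, main_weights, comp_weights):
    """Return the five training commands in the order described above."""
    pt = ["python", "tools/train_stage1_PT.py", "--cfg", PT_CFG,
          "DATASET.ROOT", dataset_root]
    return [
        # Main teacher: well-lit, then fake low-light.
        pt,
        pt + ["TRAIN.STAGE", "PT_LL", "MODEL.PRETRAINED_MAIN", main_weights],
        # Comp. teacher: well-lit, then fake low-light.
        pt + ["MODEL.NAME", "hrnet_comp"],
        pt + ["TRAIN.STAGE", "PT_LL", "MODEL.NAME", "hrnet_comp",
              "MODEL.PRETRAINED_COMP", comp_weights],
        # KA stage, initialized from both teachers.
        ["python", "tools/train_stage2_KA.py", "--cfg", KA_CFG,
         "DATASET.ROOT", dataset_root,
         "MODEL.PRETRAINED_MAIN", main_weights,
         "MODEL.PRETRAINED_COMP", comp_weights],
    ]

def run_all(dataset_root, main_weights, comp_weights):
    for cmd in stage_cmds(dataset_root, main_weights, comp_weights):
        subprocess.run(cmd, check=True)  # stop immediately if a stage fails
```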
Our code is mainly based on DEKR.
```
@inproceedings{DA-LLPose,
  title={Domain-Adaptive 2D Human Pose Estimation via Dual Teachers in Extremely Low-Light Conditions},
  author={Ai, Yihao and Qi, Yifei and Wang, Bo and Chen, Yu and Wang, Xinchao and Tan, Robby T.},
  booktitle={ECCV},
  year={2024},
}
```