ViTAE-Transformer / ViTPose

The official repo for [NeurIPS'22] "ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation" and [TPAMI'23] "ViTPose++: Vision Transformer for Generic Body Pose Estimation"
Apache License 2.0

Can't run the single machine experiment #116

Open TimurAkhtemov opened 1 year ago

TimurAkhtemov commented 1 year ago

I am attempting to train a model using the tools/dist_train.sh script as documented, but I'm encountering several errors and warnings during execution. Below are the command I used and the corresponding output:

bash tools/dist_train.sh /home/vuetech/Desktop/vuetech/ViTAE_model/ViTPose/configs/wholebody/2d_kpt_sview_rgb_img/associative_embedding/coco-wholebody/higherhrnet_coco-wholebody.yml 2 --cfg-options model.pretrained=/home/vuetech/Desktop/vuetech/ViTAE_model/ViTPose/vitpose+_small.pth --seed 0
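For context, tools/dist_train.sh in this mmpose-derived codebase is a thin wrapper around torch.distributed.launch; assuming the stock mmpose launcher script (not verified against this particular checkout), the command above expands to roughly:

python -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT tools/train.py $CONFIG --launcher pytorch ${@:3}

with $GPUS=2 here, so everything in the output below is printed either by torch.distributed.launch itself or by the two tools/train.py worker processes it spawns.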

Errors and Warnings:

Deprecation Warning: The script triggers a FutureWarning that torch.distributed.launch is deprecated, advising the use of torchrun instead and suggesting that the --local-rank argument be replaced by reading os.environ['LOCAL_RANK'].

Apex Not Installed: "apex is not installed" is printed multiple times, indicating that the optional NVIDIA Apex dependency is missing.

Missing MultiScaleDeformableAttention: A warning about failing to import MultiScaleDeformableAttention indicates that mmcv was installed without its compiled ops, i.e. mmcv-full is missing.

Unrecognized Arguments: train.py exits with "unrecognized arguments: --local-rank=1" and "unrecognized arguments: --local-rank=0", pointing to an argument-parsing mismatch in train.py (see the sketch after this list).

ChildFailedError: A torch.distributed.elastic.multiprocessing.errors.ChildFailedError is raised because both worker processes exited with code 2 as a result of the unrecognized argument above.
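The fatal error is the unrecognized --local-rank flag. Since PyTorch 2.0, torch.distributed.launch passes the hyphenated --local-rank to each worker process, while the argument parser in tools/train.py registers only --local_rank (visible in the usage string below), so both workers exit with code 2 before training starts. A minimal sketch of a local workaround, assuming the stock mmpose-style parser in tools/train.py, is to register both spellings for the same option:

import argparse

# Sketch: one option registered under both the new (hyphen) and old
# (underscore) spellings of the launcher's rank flag.
parser = argparse.ArgumentParser()
parser.add_argument(
    '--local-rank', '--local_rank',  # PyTorch >= 2.0 passes the hyphenated form
    dest='local_rank',               # keep the attribute name train.py already uses
    type=int,
    default=0)

# Both spellings now parse to the same attribute.
assert parser.parse_args(['--local-rank=1']).local_rank == 1
assert parser.parse_args(['--local_rank=1']).local_rank == 1

Alternatively, launching with torchrun, as the deprecation warning suggests, sidesteps the flag entirely: torchrun exports LOCAL_RANK into each worker's environment instead of passing an argument, and the usual mmpose train.py falls back to os.environ['LOCAL_RANK'] when the flag is absent. An untested invocation along those lines (substituting the config and checkpoint paths from the command above) would be:

torchrun --nproc_per_node=2 tools/train.py <CONFIG> --cfg-options model.pretrained=<CHECKPOINT> --seed 0 --launcher pytorch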

Here is the full error message:

/home/vuetech/Desktop/vuetech/environments/ta_venv/lib/python3.8/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use-env is set by default in torchrun. If your script expects --local-rank argument to be set, please change it to read from os.environ['LOCAL_RANK'] instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions

warnings.warn(
[2023-10-13 09:20:58,176] torch.distributed.run: [WARNING]
[2023-10-13 09:20:58,176] torch.distributed.run: [WARNING]
[2023-10-13 09:20:58,176] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2023-10-13 09:20:58,176] torch.distributed.run: [WARNING]
apex is not installed
apex is not installed
apex is not installed
/home/vuetech/Desktop/vuetech/ViTAE_model/mmcv/mmcv/cnn/bricks/transformer.py:27: UserWarning: Fail to import MultiScaleDeformableAttention from mmcv.ops.multi_scale_deform_attn, You should install mmcv-full if you need this module.
  warnings.warn('Fail to import MultiScaleDeformableAttention from '
apex is not installed
apex is not installed
apex is not installed
/home/vuetech/Desktop/vuetech/ViTAE_model/mmcv/mmcv/cnn/bricks/transformer.py:27: UserWarning: Fail to import MultiScaleDeformableAttention from mmcv.ops.multi_scale_deform_attn, You should install mmcv-full if you need this module.
  warnings.warn('Fail to import MultiScaleDeformableAttention from '
usage: train.py [-h] [--work-dir WORK_DIR] [--resume-from RESUME_FROM]
                [--no-validate]
                [--gpus GPUS | --gpu-ids GPU_IDS [GPU_IDS ...] | --gpu-id GPU_ID]
                [--seed SEED] [--deterministic]
                [--cfg-options CFG_OPTIONS [CFG_OPTIONS ...]]
                [--launcher {none,pytorch,slurm,mpi}]
                [--local_rank LOCAL_RANK] [--autoscale-lr]
                config
train.py: error: unrecognized arguments: --local-rank=1
usage: train.py [-h] [--work-dir WORK_DIR] [--resume-from RESUME_FROM]
                [--no-validate]
                [--gpus GPUS | --gpu-ids GPU_IDS [GPU_IDS ...] | --gpu-id GPU_ID]
                [--seed SEED] [--deterministic]
                [--cfg-options CFG_OPTIONS [CFG_OPTIONS ...]]
                [--launcher {none,pytorch,slurm,mpi}]
                [--local_rank LOCAL_RANK] [--autoscale-lr]
                config
train.py: error: unrecognized arguments: --local-rank=0
[2023-10-13 09:21:03,190] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 2) local_rank: 0 (pid: 9764) of binary: /home/vuetech/Desktop/vuetech/environments/ta_venv/bin/python
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/vuetech/Desktop/vuetech/environments/ta_venv/lib/python3.8/site-packages/torch/distributed/launch.py", line 196, in <module>
    main()
  File "/home/vuetech/Desktop/vuetech/environments/ta_venv/lib/python3.8/site-packages/torch/distributed/launch.py", line 192, in main
    launch(args)
  File "/home/vuetech/Desktop/vuetech/environments/ta_venv/lib/python3.8/site-packages/torch/distributed/launch.py", line 177, in launch
    run(args)
  File "/home/vuetech/Desktop/vuetech/environments/ta_venv/lib/python3.8/site-packages/torch/distributed/run.py", line 797, in run
    elastic_launch(
  File "/home/vuetech/Desktop/vuetech/environments/ta_venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/vuetech/Desktop/vuetech/environments/ta_venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

tools/train.py FAILED

Failures:
[1]:
  time       : 2023-10-13_09:21:03
  host       : vuetech-desktop
  rank       : 1 (local_rank: 1)
  exitcode   : 2 (pid: 9765)
  error_file : <N/A>
  traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
  time       : 2023-10-13_09:21:03
  host       : vuetech-desktop
  rank       : 0 (local_rank: 0)
  exitcode   : 2 (pid: 9764)
  error_file : <N/A>
  traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
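The two non-fatal warning families are worth separating from the crash: the "apex is not installed" messages only mean the optional NVIDIA Apex package is absent (the code prints the message and continues), and the MultiScaleDeformableAttention warning means the local mmcv checkout was built without its compiled CUDA/C++ ops. If those ops are needed, rebuilding mmcv with ops enabled, as in this repo's installation instructions, should clear that warning:

cd /home/vuetech/Desktop/vuetech/ViTAE_model/mmcv
MMCV_WITH_OPS=1 pip install -e .

Neither warning, however, explains the exit code 2; that comes from the unrecognized --local-rank argument discussed above.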

douyueyang commented 10 months ago

I have encountered the same problem as you.

Mabroukiimen commented 5 months ago

Hello, did you get to solve it?

lawrence-fw commented 3 weeks ago

I too have encountered this issue. Is there a known fix?