(CLIP) fumon@LAPTOP-2S5HFEN5:~/Chinese-CLIP-master/Chinese-CLIP-master$ bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh DATAPATH
/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torch.distributed.run.
Note that --use_env is set by default in torch.distributed.run.
If your script expects --local_rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
warnings.warn(
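The FutureWarning above is advisory only and is not the cause of the failure, but migrating as it suggests is straightforward. A minimal sketch of reading the rank from the environment instead of a `--local_rank` argument (the variable name is illustrative):

```python
import os

# torch.distributed.run exports LOCAL_RANK into each worker's environment;
# read it from there instead of parsing a --local_rank CLI argument.
# Defaulting to "0" keeps single-process runs working.
local_rank = int(os.environ.get("LOCAL_RANK", "0"))
print(local_rank)
```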
Loading vision model config from cn_clip/clip/model_configs/RN50.json
Loading text model config from cn_clip/clip/model_configs/RBT3-chinese.json
Traceback (most recent call last):
File "cn_clip/training/main.py", line 350, in <module>
main()
File "cn_clip/training/main.py", line 135, in main
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_device_rank], find_unused_parameters=find_unused_parameters)
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__
dist._verify_model_across_ranks(self.process_group, parameters)
RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 28405) of binary: /home/fumon/anaconda3/envs/CLIP/bin/python
Traceback (most recent call last):
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/run.py", line 689, in run
elastic_launch(
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
cn_clip/training/main.py FAILED
Root Cause:
[0]:
  time: 2024-03-26_21:46:34
  rank: 0 (local_rank: 0)
  exitcode: 1 (pid: 28405)
  error_file: <N/A>
  msg: "Process failed with exitcode 1"
Other Failures:
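The actual failure is the ncclSystemError raised while DistributedDataParallel initializes its process group. On laptop/WSL2 setups like this one, NCCL's shared-memory and peer-to-peer transports often fail with exactly this "System call (socket, malloc, munmap, etc) failed" message. A hedged workaround, not guaranteed for this machine, is to turn on NCCL diagnostics and disable those transports before rerunning; these are standard NCCL environment variables:

```shell
# Print NCCL diagnostics so the failing system call shows up in the log.
export NCCL_DEBUG=INFO
# Fall back from shared-memory and peer-to-peer transports, which can fail
# under WSL2 or containers with restricted /dev/shm or IPC access.
export NCCL_SHM_DISABLE=1
export NCCL_P2P_DISABLE=1
```

Then rerun `bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh DATAPATH` in the same shell; with NCCL_DEBUG=INFO the log should name the specific transport or system call that fails.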