Branch

main branch (1.x version, such as v1.0.0 or dev-1.x branch)

Prerequisite

Environment
sys.platform: linux
Python: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: Quadro RTX 8000
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: x86_64-linux-gnu-gcc (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
PyTorch: 2.1.0+cu118
PyTorch compiling details: PyTorch built with:
TorchVision: 0.16.0+cu118
OpenCV: 4.8.1
MMEngine: 0.9.0
MMAction2: 1.2.0+4d6c934
MMCV: 2.1.0
MMDetection: 3.2.0
MMPose: 1.2.0
Describe the bug
I am looking to reproduce a checkpoint. This is the link to the checkpoint https://download.openmmlab.com/mmaction/v1.0/skeleton/posec3d/slowonly_r50_8xb16-u48-240e_ntu60-xsub-keypoint/slowonly_r50_8xb16-u48-240e_ntu60-xsub-keypoint_20220815-38db104b.pth
However, I am getting a PyTorch deterministic-implementation error. I would like to know exactly how this checkpoint was built. Was it trained deterministically? If so, should I forcefully ignore the implementation error, and how do I do that without modifying mmengine?
Looking at the commit history, it appears that @Dai-Wenxun is the one who created the checkpoint referenced in the docs.
Reproduces the problem - command or script
CUBLAS_WORKSPACE_CONFIG=:4096:8 python tools/train.py configs/skeleton/posec3d/slowonly_r50_8xb16-u48-240e_ntu60-xsub-keypoint.py --work-dir workdir/PoseC3DRetrain --cfg-options gpu_ids="[4]" --seed 0 --deterministic
Reproduces the problem - error message
Traceback (most recent call last):
File "/home/Lawrence/mmaction2/tools/train.py", line 143, in <module>
main()
File "/home/Lawrence/mmaction2/tools/train.py", line 139, in main
runner.train()
File "/home/Lawrence/openmmenv/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1777, in train
model = self.train_loop.run() # type: ignore
File "/home/Lawrence/openmmenv/lib/python3.10/site-packages/mmengine/runner/loops.py", line 96, in run
self.run_epoch()
File "/home/Lawrence/openmmenv/lib/python3.10/site-packages/mmengine/runner/loops.py", line 112, in run_epoch
self.run_iter(idx, data_batch)
File "/home/Lawrence/openmmenv/lib/python3.10/site-packages/mmengine/runner/loops.py", line 128, in run_iter
outputs = self.runner.model.train_step(
File "/home/Lawrence/openmmenv/lib/python3.10/site-packages/mmengine/model/base_model/base_model.py", line 116, in train_step
optim_wrapper.update_params(parsed_losses)
File "/home/Lawrence/openmmenv/lib/python3.10/site-packages/mmengine/optim/optimizer/optimizer_wrapper.py", line 196, in update_params
self.backward(loss)
File "/home/Lawrence/openmmenv/lib/python3.10/site-packages/mmengine/optim/optimizer/optimizer_wrapper.py", line 220, in backward
loss.backward(**kwargs)
File "/home/Lawrence/openmmenv/lib/python3.10/site-packages/torch/_tensor.py", line 492, in backward
torch.autograd.backward(
File "/home/Lawrence/openmmenv/lib/python3.10/site-packages/torch/autograd/__init__.py", line 251, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: max_pool3d_with_indices_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.
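As the RuntimeError itself suggests, the `--deterministic` flag ends up calling `torch.use_deterministic_algorithms(True)`, and `max_pool3d_with_indices_backward_cuda` has no deterministic CUDA kernel. A minimal sketch of the workaround the error message proposes, assuming you can re-apply the setting after the runner configures determinism (for example at the top of a patched copy of `tools/train.py`, or from a custom hook), is to switch to warn-only mode so the op falls back to its non-deterministic kernel with a warning instead of raising:

```python
import torch

# Sketch (assumption): re-enable deterministic algorithms in warn-only mode.
# Ops with a deterministic implementation still use it; ops without one
# (like max_pool3d_with_indices_backward_cuda) emit a warning instead of
# raising a RuntimeError.
torch.use_deterministic_algorithms(True, warn_only=True)

# Verify the flags took effect.
assert torch.are_deterministic_algorithms_enabled()
assert torch.is_deterministic_algorithms_warn_only_enabled()
```

Note the trade-off: the max-pool backward pass then runs non-deterministically, so bit-exact reproduction of the checkpoint is not guaranteed through that op even with a fixed seed and `CUBLAS_WORKSPACE_CONFIG` set.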