I’m encountering a warning related to the `torch.cumsum` operation when running a specific piece of code. Here is the warning message:
```
/seu_share/home/gengkeke/220220324/anaconda3/lib/python3.10/site-packages/mamba_ssm/ops/triton/ssd_chunk_state.py:845: UserWarning: cumsum_cuda_kernel does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at ../aten/src/ATen/Context.cpp:83.)
  torch.cumsum(ddA_cumsum, dim=-1, out=ddA_cumsum)
/seu_share/home/gengkeke/220220324/anaconda3/lib/python3.10/site-packages/mamba_ssm/ops/triton/ssd_combined.py:437: UserWarning: cumsum_cuda_kernel does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at ../aten/src/ATen/Context.cpp:83.)
  ddA_prev = ddA_cumsum_prev.flip([-1]).cumsum(dim=-1).flip([-1])
```
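As far as I can tell, the warning is not specific to mamba_ssm: any `torch.cumsum` on a CUDA tensor triggers it once deterministic algorithms are enabled in warn-only mode. A minimal sketch of what I believe reproduces it (the tensor shape is arbitrary):

```python
import torch

# Assumed setup: deterministic algorithms enabled in warn-only mode.
torch.use_deterministic_algorithms(True, warn_only=True)

x = torch.randn(4, 8, device="cuda")
y = torch.cumsum(x, dim=-1)  # emits "UserWarning: cumsum_cuda_kernel does not have a deterministic implementation ..."
```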
While running the code, I encountered the warning above about `cumsum_cuda_kernel` not having a deterministic implementation. The warning message suggests filing an issue, so I wanted to ask whether there is a known workaround, or any plan to add deterministic support for this operation.
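In case it helps others, the only workaround I’ve found so far is to lift the determinism requirement just around the call that reaches the non-deterministic kernel. A sketch, where `model` and `x` stand in for the actual Mamba module and its input:

```python
import torch

# Save the current determinism settings, disable the check around the
# offending call, then restore the previous state.
prev = torch.are_deterministic_algorithms_enabled()
warn_only = torch.is_deterministic_algorithms_warn_only_enabled()

torch.use_deterministic_algorithms(False)
try:
    out = model(x)  # hypothetical forward pass that hits the cumsum kernels
finally:
    torch.use_deterministic_algorithms(prev, warn_only=warn_only)
```

Of course this only silences the check rather than making `cumsum` deterministic, so any run-to-run variance from that kernel remains.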
I appreciate your help and any guidance you can provide!