goombalab / hydra

Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers"

Request for deterministic support in 'cumsum_cuda_kernel' #13

Open Liu-zhi-chao opened 2 months ago

Liu-zhi-chao commented 2 months ago

I’m encountering a warning related to the torch.cumsum operation when running a specific piece of code. Here is the warning message:

```
/seu_share/home/gengkeke/220220324/anaconda3/lib/python3.10/site-packages/mamba_ssm/ops/triton/ssd_chunk_state.py:845: UserWarning: cumsum_cuda_kernel does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at ../aten/src/ATen/Context.cpp:83.)
  torch.cumsum(ddA_cumsum, dim=-1, out=ddA_cumsum)

/seu_share/home/gengkeke/220220324/anaconda3/lib/python3.10/site-packages/mamba_ssm/ops/triton/ssd_combined.py:437: UserWarning: cumsum_cuda_kernel does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at ../aten/src/ATen/Context.cpp:83.)
  ddA_prev = ddA_cumsum_prev.flip([-1]).cumsum(dim=-1).flip([-1])
```
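For context, this warning is emitted by PyTorch itself, not by mamba_ssm: any `torch.cumsum` on a CUDA tensor triggers it once deterministic algorithms are requested with `warn_only=True`. A minimal standalone sketch (my own, not from this repo) that reproduces the setting:

```python
import warnings
import torch

# Request determinism, but only warn (instead of raising) when an op
# lacks a deterministic implementation -- the same setting as in the log above.
torch.use_deterministic_algorithms(True, warn_only=True)
assert torch.are_deterministic_algorithms_enabled()

# On a CUDA device, cumsum still runs the nondeterministic kernel
# and emits the cumsum_cuda_kernel UserWarning instead of erroring.
if torch.cuda.is_available():
    x = torch.randn(4, 8, device="cuda")
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        torch.cumsum(x, dim=-1)
    print([str(w.message) for w in caught])
```

With `warn_only=False` (the default for `torch.use_deterministic_algorithms(True)`), the same call would raise a `RuntimeError` instead of warning.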

The warning suggests filing an issue with PyTorch, but since it is triggered inside mamba_ssm's Triton ops, I wanted to ask here whether there is any workaround, or any plan to add deterministic support for this operation.
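As a stopgap for bit-for-bit reproducibility, one common workaround (a sketch of my own, not something mamba_ssm provides) is to express the prefix sum as a matmul with a triangular ones matrix. `torch.matmul` has a deterministic CUDA path under `torch.use_deterministic_algorithms(True)` (cuBLAS additionally requires `CUBLAS_WORKSPACE_CONFIG=:4096:8`), at the cost of O(n²) work instead of O(n):

```python
import torch

def deterministic_cumsum(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Prefix sum along `dim` computed as x @ upper-triangular ones.

    out[..., i] = sum_{j <= i} x[..., j], matching torch.cumsum, but
    routed through matmul rather than cumsum_cuda_kernel. O(n^2) work,
    so only practical for moderate sequence lengths.
    """
    x = x.transpose(dim, -1)
    n = x.shape[-1]
    # ones_triu[j, i] = 1 iff j <= i, so column i sums elements 0..i.
    ones_triu = torch.ones(n, n, dtype=x.dtype, device=x.device).triu()
    return (x @ ones_triu).transpose(dim, -1)
```

Patching this into the mamba_ssm kernels is intrusive, though, so treating the warning as informational (which `warn_only=True` already does) may be the more practical option until PyTorch adds deterministic cumsum support upstream.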

I appreciate your help and any guidance you can provide!

altaic commented 2 months ago

That's being discussed here: https://github.com/state-spaces/mamba/issues/137

Liu-zhi-chao commented 2 months ago

> That's being discussed here: state-spaces/mamba#137

Thanks for your reply, it is very helpful to me!