k2-fsa / k2

FSA/FST algorithms, differentiable, with PyTorch compatibility.
https://k2-fsa.github.io/k2
Apache License 2.0

Support for PyTorch 2.0 #1159

Closed · ezerhouni closed this issue 1 year ago

ezerhouni commented 1 year ago

Is there any plan to support PyTorch 2.0?

https://pytorch.org/get-started/pytorch-2.0/

The release should be at the beginning of March.

Thank you!

csukuangfj commented 1 year ago

Yes, we will support PyTorch 2.0 once it is released.

I think PyTorch 2.0 is backward compatible, so it should be easy to support.

GeorgeLiou commented 1 year ago

> Yes, we will support PyTorch 2.0 once it is released.
>
> I think PyTorch 2.0 is backward compatible, so it should be easy to support.

Hi, csukuangfj,

PyTorch 2.0 has been released, and Anaconda now supports Python 3.11 for numpy, scipy, scikit-learn, and so on.

It seems that PyTorch 2.0 and Python 3.11 bring many improvements. Which new features of PyTorch 2.0 can we expect to be introduced to k2?

Thank you :)

csukuangfj commented 1 year ago

> Is there any plan to support PyTorch 2.0?
>
> https://pytorch.org/get-started/pytorch-2.0/
>
> The release should be at the beginning of March.
>
> Thank you!

https://k2-fsa.github.io/k2/installation/pre-compiled-cuda-wheels-linux/2.0.0.html

This page contains pre-compiled k2 wheels for torch 2.0.

Basically, you don't need to change any k2 code to support torch 2.0.
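
To confirm that the installed wheel matches your local PyTorch, here is a minimal sanity check (a sketch; the wheel to install is whichever one you copy from the page above, the command in the comment is only a placeholder):

```python
# After installing one of the wheels listed on the page above, e.g.
#   pip install <k2 wheel for torch 2.0, copied from that page>   # placeholder
# check that k2 imports cleanly against torch 2.0:
import torch
import k2

print("torch:", torch.__version__)       # expected to start with "2.0"
print("k2 imported from:", k2.__file__)

# k2 also ships a small helper that prints its build configuration,
# including the torch version it was compiled against:
#   python3 -m k2.version
```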

ezerhouni commented 1 year ago

@csukuangfj I tried using torch.compile and it crashed. I will double-check, as it was only a quick test.

csukuangfj commented 1 year ago

> > Yes, we will support PyTorch 2.0 once it is released.
> >
> > I think PyTorch 2.0 is backward compatible, so it should be easy to support.
>
> Hi, csukuangfj,
>
> PyTorch 2.0 has been released, and Anaconda now supports Python 3.11 for numpy, scipy, scikit-learn, and so on.
>
> It seems that PyTorch 2.0 and Python 3.11 bring many improvements. Which new features of PyTorch 2.0 can we expect to be introduced to k2?
>
> Thank you :)

I am afraid not.

Actually, the core part of k2 doesn't depend on torch. The C++ part uses torch to manage CUDA memory allocation, though we can use other allocators if available.
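
To make the current interop concrete: k2's arc scores are already ordinary torch tensors, so autograd works today without any torch 2.0-specific features. A minimal sketch, assuming k2's public FSA API (Fsa.from_str, create_fsa_vec, get_tot_scores); the toy FSA and its scores are made up for illustration:

```python
# A sketch: k2 arc scores are plain torch tensors, so torch autograd works
# on them directly; nothing here needs torch 2.0-specific features.
import k2

# A tiny FSA in k2's text format: "src dst label score" per arc,
# with -1 as the label on arcs entering the final state.
s = '''
0 1 1 0.1
1 2 2 0.2
2 3 -1 0.3
3
'''
fsa = k2.Fsa.from_str(s)
fsa.requires_grad_(True)                 # fsa.scores is a torch tensor

fsa_vec = k2.create_fsa_vec([fsa])       # get_tot_scores expects an FsaVec
tot_scores = fsa_vec.get_tot_scores(log_semiring=True, use_double_scores=False)
tot_scores.sum().backward()

print(fsa.scores.grad)                   # gradients w.r.t. the arc scores
```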

csukuangfj commented 1 year ago

> @csukuangfj I tried using torch.compile and it crashed. I will double-check, as it was only a quick test.

Anything related to k2 that is used in a module called with torch.compile() will not work; the same goes for torch.jit.trace() and torch.jit.script().
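
In practice that means keeping k2 outside the compiled (or scripted/traced) region. A minimal sketch with a hypothetical toy model: compile only the pure-PyTorch network and leave any k2-based loss computation on its outputs in eager mode.

```python
# A sketch (the toy model is hypothetical): compile only the pure-PyTorch
# network; keep every k2 call outside the compiled region, in eager mode.
import torch
import torch.nn as nn


class ToyAcousticModel(nn.Module):
    """A stand-in for a real model; only standard PyTorch ops, no k2."""

    def __init__(self, num_features: int = 80, num_classes: int = 500):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).log_softmax(dim=-1)


model = ToyAcousticModel()
compiled_model = torch.compile(model)   # only this part goes through torch.compile

features = torch.randn(4, 100, 80)      # (batch, time, feature)
log_probs = compiled_model(features)    # computed by the compiled graph

# Anything involving k2 (building decoding graphs, lattice-based losses, etc.)
# should consume `log_probs` here, in ordinary eager mode, outside torch.compile.
```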