k2-fsa / k2

FSA/FST algorithms, differentiable, with PyTorch compatibility.
https://k2-fsa.github.io/k2
Apache License 2.0

any possibility / estimation of work necessary to support `mps` device #1243


drahnreb commented 11 months ago

This is more of a feature request / discussion.

Apple Silicon may not be the best hardware for training FSA/FST models, and k2 does not officially support training on Metal-accelerated (`mps`) devices.

I tried running training for a stateless transducer (zipformer) with the necessary device arguments (`mps` instead of `cpu`) and ran into problems, e.g. with the RaggedArray implementation here.

What would be a rough estimate of the work needed to implement rudimentary `mps` support? Could anybody give me pointers on where to start?
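
For illustration, here is a rough sketch of why the device argument alone isn't enough, assuming a per-device context abstraction in the spirit of what k2 uses internally (the names below are illustrative, not k2's real API): allocations and kernel launches are dispatched through a device context, and only CPU and CUDA contexts exist today, so an `mps` tensor has nowhere to dispatch to.

```cpp
// Illustrative sketch only: the type and method names mirror k2's style but
// are NOT its real API. The point is that every allocation and kernel launch
// goes through a per-device Context, and only CPU/CUDA backends exist today.
#include <cstddef>
#include <cstdlib>

enum class DeviceType { kCpu, kCuda /* , kMetal would be new */ };

class Context {
 public:
  virtual ~Context() = default;
  virtual DeviceType GetDeviceType() const = 0;
  virtual void *Allocate(std::size_t bytes) = 0;
  virtual void Deallocate(void *data) = 0;
};

class CpuContext : public Context {
 public:
  DeviceType GetDeviceType() const override { return DeviceType::kCpu; }
  void *Allocate(std::size_t bytes) override { return std::malloc(bytes); }
  void Deallocate(void *data) override { std::free(data); }
};

// A hypothetical MetalContext would wrap MTLDevice/MTLBuffer for memory and
// dispatch precompiled compute pipelines for kernels; data structures like
// RaggedArray sit above this layer and only talk to Context.
```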

pzelasko commented 11 months ago

My 2c; surely Dan, Fangjun, and others could tell you more: besides the need to rewrite the kernels for Metal, you'd have to somehow work around the design based on lambdas. CUDA and moderngpu let you write kernels as lambda functions; you'd need to figure out how to make Metal work with that (IIRC k2 relies heavily on moderngpu).
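
To make the lambda issue concrete, here is a minimal sketch of the pattern using plain CUDA extended lambdas with a hand-rolled launcher (not k2's or moderngpu's actual API): the kernel body lives inline at the call site as a lambda, and Metal's precompiled shader model has no direct counterpart for that.

```cpp
// Minimal sketch of the kernel-as-lambda idiom; compile with
// nvcc --extended-lambda. This is NOT k2's actual eval API.
#include <cuda_runtime.h>
#include <cstdint>

template <typename LambdaT>
__global__ void EvalKernel(int32_t n, LambdaT lambda) {
  int32_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) lambda(i);  // body was written inline at the call site
}

template <typename LambdaT>
void Eval(int32_t n, LambdaT lambda) {
  int32_t block = 256;
  int32_t grid = (n + block - 1) / block;
  EvalKernel<<<grid, block>>>(n, lambda);
}

int main() {
  int32_t n = 1024;
  float *data = nullptr;
  cudaMalloc(&data, n * sizeof(float));
  // The per-element logic is an ad-hoc device lambda. Porting to Metal means
  // replacing every such call site with a separately compiled shader (or a
  // codegen step), since Metal cannot capture host-side C++ lambdas.
  Eval(n, [=] __device__(int32_t i) { data[i] = 0.5f * i; });
  cudaDeviceSynchronize();
  cudaFree(data);
  return 0;
}
```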

csukuangfj commented 11 months ago

> My 2c; surely Dan, Fangjun, and others could tell you more: besides the need to rewrite the kernels for Metal, you'd have to somehow work around the design based on lambdas. CUDA and moderngpu let you write kernels as lambda functions; you'd need to figure out how to make Metal work with that (IIRC k2 relies heavily on moderngpu).

Yes, Piotr is right.