dgasmith / opt_einsum

⚡️Optimizing einsum functions in NumPy, TensorFlow, Dask, and more with contraction order optimization.
https://dgasmith.github.io/opt_einsum/
MIT License

Multiple GPUs #215

Closed · alex-orca closed this issue 6 months ago

alex-orca commented 1 year ago

Hi, I am using this package (via quimb) with pytorch as a backend for large contractions. I am able to run on 1 GPU using pytorch, but is there any way to use multiple GPUs at once for distributed calculation of contractions? I am happy to change backend if needed! Thanks
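For readers who land here without quimb or pytorch installed: the core contraction-order optimization the package provides can be sketched with NumPy alone, since the same path-finding logic is exposed through `np.einsum_path` and the `optimize=` keyword of `np.einsum`. The shapes below are hypothetical, chosen so that the contraction order matters; with torch tensors the analogous call is `opt_einsum.contract`, which dispatches on the array type.

```python
import numpy as np

# Hypothetical chain contraction where order matters:
# contracting (a @ b) first creates a large 10x40 intermediate,
# while a good path keeps intermediates small.
a = np.random.rand(10, 30)
b = np.random.rand(30, 40)
c = np.random.rand(40, 5)

# Ask NumPy's built-in optimizer for a contraction path.
path, info = np.einsum_path("ij,jk,kl->il", a, b, c, optimize="optimal")

# Evaluate the contraction along the optimized path.
optimized = np.einsum("ij,jk,kl->il", a, b, c, optimize=path)

# Same result as the naive left-to-right matrix product.
naive = a @ b @ c
assert np.allclose(naive, optimized)
```

This only optimizes the order of pairwise contractions on a single device; it does not by itself distribute work across GPUs, which is the question here.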

jcmgray commented 1 year ago

Hi @alex-orca, automatic distributed contraction is quite a difficult task, and I'm not aware of any method that is purely 'just switch backend', though I'd be happy to hear if anyone knows of one! I know there is work being done on it.

One option, which hasn't made it into opt_einsum yet, is 'slicing'. cotengra supports automatic slicing, and there is an MPI-style example here. Bear in mind that the overhead can depend strongly on the actual contraction geometry.
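The slicing idea itself is simple to illustrate by hand, under the assumption of a toy two-tensor contraction: fix one summed index to a concrete value, contract the resulting smaller problem, and sum the partial results. Each slice is independent, so slices can be sent to different GPUs or MPI ranks (cotengra automates choosing which indices to slice and how many):

```python
import numpy as np

# A hypothetical small matrix contraction to slice over.
a = np.random.rand(4, 6)
b = np.random.rand(6, 5)

# Full contraction: sum over the shared index k.
full = np.einsum("ik,kj->ij", a, b)

# 'Sliced' version: fix k, contract the reduced (rank-1) problem,
# and accumulate. Each term in this sum is independent work that
# could run on a separate device, at the cost of repeating any
# computation shared between slices.
sliced = sum(
    np.einsum("i,j->ij", a[:, k], b[k, :])
    for k in range(a.shape[1])
)

assert np.allclose(full, sliced)
```

The trade-off jcmgray mentions shows up here: slicing removes an index from every intermediate that carried it, but work not involving the sliced index is redone once per slice, so the overhead depends on the contraction geometry.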

dgasmith commented 6 months ago

I'm closing this as out of scope. We're here to optimize the contraction path and nothing more, echoing @jcmgray's answer.