dgasmith / opt_einsum

⚡️Optimizing einsum functions in NumPy, TensorFlow, Dask, and more with contraction order optimization.
https://dgasmith.github.io/opt_einsum/
MIT License

opt_einsum's capability of using multiple cores / GPUs for MTTKRP-like operations between a sparse tensor and dense factor matrices? #113

Closed · JunhaoWang closed this issue 4 years ago

JunhaoWang commented 5 years ago

If I have a sparse tensor and sparse or dense factor matrices:

abc, bz, cz -> az (used in CP decomposition)

where abc is sparse, but bz and cz are dense or sparse, how would opt_einsum handle the contraction? And would it utilize multiple CPUs and/or GPUs in this contraction?
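For concreteness, here is a minimal sketch of what this contraction could look like, assuming the pydata/sparse and NumPy libraries are installed; the shapes, density, and the mixed sparse/dense dispatch working smoothly are all assumptions on my part, not something opt_einsum guarantees:

```python
# Minimal sketch (assumptions: pydata/sparse + NumPy installed,
# illustrative shapes/density, mixed sparse/dense operands supported).
import numpy as np
import sparse
import opt_einsum as oe

a, b, c, z = 100, 100, 100, 16

X = sparse.random((a, b, c), density=0.001)  # sparse COO tensor
B = np.random.rand(b, z)                     # dense factor matrix
C = np.random.rand(c, z)                     # dense factor matrix

# opt_einsum chooses the contraction order, then dispatches each
# pairwise contraction to functions from the operands' libraries.
result = oe.contract('abc,bz,cz->az', X, B, C)
```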

The same question goes for:

abc, def, be, cf -> ad (used in Tucker decomposition)

where abc is sparse, but the other components could be dense or sparse.
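For this Tucker-style expression, one can at least inspect the contraction order opt_einsum would pick without executing anything, via `contract_path`. A sketch with made-up dense stand-in shapes:

```python
# Sketch: inspect the planned contraction order for the Tucker-style
# expression (all dimensions here are illustrative assumptions).
import numpy as np
import opt_einsum as oe

a, b, c, d, e, f = 50, 50, 50, 8, 8, 8

X = np.random.rand(a, b, c)   # dense stand-in for the sparse tensor
G = np.random.rand(d, e, f)   # core tensor
B = np.random.rand(b, e)
C = np.random.rand(c, f)

path, info = oe.contract_path('abc,def,be,cf->ad', X, G, B, C)
print(info)   # pairwise steps, intermediate shapes, FLOP estimate
```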

jcmgray commented 5 years ago

That would pretty much be up to whichever libraries define the dense and sparse arrays. The main work opt_einsum does is finding the contraction order; it then calls a combination of tensordot / einsum / transpose taken from whichever library the array objects are defined in (by default, though you can also explicitly specify where to look for these functions).
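As a sketch of that last point, the `backend` keyword to `contract` forces where the contraction functions are looked up; here I assume the `sparse` package is installed and exposes the functions opt_einsum needs:

```python
# Sketch: explicitly pinning the backend (assumption: pydata/sparse
# is installed and provides the required tensordot/transpose calls).
import sparse
import opt_einsum as oe

X = sparse.random((100, 100, 100), density=0.001)
B = sparse.random((100, 16), density=0.1)
C = sparse.random((100, 16), density=0.1)

# By default the backend is inferred from the array objects;
# it can also be specified explicitly:
out = oe.contract('abc,bz,cz->az', X, B, C, backend='sparse')
```

Whether the contraction then uses multiple cores or a GPU is likewise a property of that backend library, not of opt_einsum itself.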