The implementation of `unfoldNd` relies on a one-hot convolution, which means the convolution kernels are highly sparse. Hence, the code might run faster when using sparse tensors.
Open questions:
- What is the result of convolving a dense input with a sparse kernel? If the output is a dense tensor, that would be ideal. (A quick probe is sketched below.)
- Does using sparse tensors provide a run-time benefit? (related: #4; see the timing sketch below.)
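To explore the first question, here is a minimal probe, not tied to the actual `unfoldNd` kernels: it builds an arbitrary mostly-zero 1D kernel, converts it to a sparse COO tensor, and checks whether `torch.nn.functional.conv1d` accepts it and, if so, what layout the output has. Shapes are made up for illustration.

```python
"""Probe: does conv1d accept a sparse (COO) weight, and is the output dense?"""

import torch
import torch.nn.functional as F

# Dense input: batch of 2 signals, 3 channels, length 8 (arbitrary sizes).
inputs = torch.rand(2, 3, 8)

# One-hot style kernel (mostly zeros), then its sparse COO representation.
dense_weight = torch.zeros(3, 3, 2)
dense_weight[0, 0, 0] = 1.0
dense_weight[1, 1, 0] = 1.0
dense_weight[2, 2, 1] = 1.0
sparse_weight = dense_weight.to_sparse()

try:
    out = F.conv1d(inputs, sparse_weight)
    print("conv1d accepted the sparse weight; output layout:", out.layout)
except RuntimeError as e:
    # Any failure here suggests sparse weights are not supported by conv1d.
    print("conv1d rejected the sparse weight:", e)
```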
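For the second question, a rough timing sketch could compare a dense one-hot matrix multiplication against its sparse counterpart via `torch.sparse.mm`. This only measures raw matmul cost with arbitrary placeholder sizes, not the full `unfoldNd` pipeline, so it is just a starting point for the benchmark discussed in #4.

```python
"""Rough timing probe: dense vs. sparse multiplication with a one-hot matrix."""

import time
import torch

rows, cols, batch = 2048, 2048, 64  # arbitrary placeholder sizes

# One-hot selection matrix: each row picks a single input entry.
dense_onehot = torch.zeros(rows, cols)
dense_onehot[torch.arange(rows), torch.randint(cols, (rows,))] = 1.0
sparse_onehot = dense_onehot.to_sparse()

inputs = torch.rand(cols, batch)


def timeit(fn, repeats=20):
    """Average wall-clock time of fn over several repeats (after a warm-up)."""
    fn()  # warm-up
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats


t_dense = timeit(lambda: dense_onehot @ inputs)
t_sparse = timeit(lambda: torch.sparse.mm(sparse_onehot, inputs))
print(f"dense matmul:  {t_dense * 1e3:.3f} ms")
print(f"sparse matmul: {t_sparse * 1e3:.3f} ms")
```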