Hi Choy. Thanks for your great work on this project! I'm trying to compute second derivatives of sparse tensor operations (e.g. sparse convolution). I found that the backward operation of sparse convolution doesn't itself have a backward operation, which makes it impossible to compute the second derivative. Is there a quick hack I could use to get it? For example, if the current C++/CUDA code can be reused, I could write a backward function that itself has a backward function calling into the existing C++/CUDA code.
https://github.com/NVIDIA/MinkowskiEngine/blob/21f3930d8bb7d27e844b21aeaf7c1a444576d853/MinkowskiEngine/MinkowskiConvolution.py#L38-L111
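For reference, here is a minimal sketch of the pattern I have in mind, using a toy cube op as a stand-in for sparse convolution. `CubeFunction` and `CubeBackward` are hypothetical names, not part of MinkowskiEngine; in the real version, the marked spots would call the existing C++/CUDA forward/backward kernels instead of the pure-PyTorch math:

```python
import torch


class CubeFunction(torch.autograd.Function):
    """Stand-in for the sparse op (e.g. MinkowskiConvolutionFunction)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Real version: call the existing C++/CUDA forward kernel here.
        return x ** 3

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Delegate to a second autograd Function so the backward pass is
        # itself differentiable, enabling double backward.
        return CubeBackward.apply(grad_out, x)


class CubeBackward(torch.autograd.Function):
    """The backward computation, wrapped so it has a backward of its own."""

    @staticmethod
    def forward(ctx, grad_out, x):
        ctx.save_for_backward(grad_out, x)
        # Real version: call the existing C++/CUDA backward kernel here.
        return grad_out * 3 * x ** 2

    @staticmethod
    def backward(ctx, grad_grad):
        grad_out, x = ctx.saved_tensors
        # Gradients of the backward computation w.r.t. its two inputs.
        return grad_grad * 3 * x ** 2, grad_grad * grad_out * 6 * x
```

With this, `torch.autograd.grad(..., create_graph=True)` on the first-order gradient routes through `CubeBackward`, so a second `grad` call succeeds. The question is whether the existing sparse convolution kernels can be reused inside `CubeBackward.backward` in the same way.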