Description
This feature should be similar to the following two examples, which are achieved using only the Python API [1][2]
The feature will introduce the capability of defining new differentiable operators for scalars and NDArrays in Java or C++. Many operators (e.g. sine, cosine, the discrete Fourier transform) cannot be synthesised from the existing NDArray API, even though they are used frequently in production and have well-known analytic gradients.
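To make the requested capability concrete, here is a minimal, framework-agnostic Python sketch of a custom differentiable sine operator, modelled on the forward/backward pattern in the MXNet and PyTorch tutorials [1][2]. The `SineOp` class and its method names are illustrative only; they are not part of any existing DJL or engine API.

```python
import math

class SineOp:
    """Sketch of a user-defined differentiable operator: the author
    supplies a forward pass and the matching backward (gradient) pass."""

    @staticmethod
    def forward(x):
        # Save the input for the backward pass and return sin(x) element-wise.
        ctx = {"x": x}
        return ctx, [math.sin(v) for v in x]

    @staticmethod
    def backward(ctx, grad_output):
        # d/dx sin(x) = cos(x); the chain rule multiplies by the
        # incoming gradient from the downstream graph.
        return [g * math.cos(v) for g, v in zip(grad_output, ctx["x"])]

ctx, y = SineOp.forward([0.0, math.pi / 2])
grads = SineOp.backward(ctx, [1.0, 1.0])
print(y)      # approximately [0.0, 1.0]
print(grads)  # approximately [1.0, 0.0]
```

The point of the feature request is that this forward/backward pair, once registered with the engine, would participate in autograd exactly like a built-in operator, rather than having to be approximated out of existing NDArray primitives.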
Will this change the current api? How?
I have only read the MXNet backend. My impression at the moment is that the existing NDArray API prioritises being engine agnostic, and most of it is generated from C++ code. This suggests that the jnarator framework should be exposed as a compiler-level plugin for advanced users, who would write C++ implementations and headers for new functions and dynamically inject them into a DJL abstraction that yields an NDArray as output.
Who will benefit from this enhancement?
Research scientists who handcraft autograd kernels; ML engineers who frequently use DFT layers for feature extraction, data augmentation, and rotation invariance; and performance optimisation engineers who want to accelerate convolution layers via the Winograd algorithm.
References
[1] https://mxnet.apache.org/versions/1.6/api/python/docs/tutorials/extend/customop.html
[2] https://pytorch.org/docs/stable/notes/extending.html
[3] https://github.com/apache/incubator-mxnet/issues/12045