First of all thanks for the amazing paper and sharing the code.
I have a question regarding the computation of gradients (and higher-order derivatives) w.r.t. the input. In the code you seem to use autograd to compute them (diff_operators.py). However, in section 2 of the supplementary material of the paper you provide an explicit formula (another SIREN) that represents the gradient.
Did you consider implementing a general mechanism that takes a SIREN network and outputs a new SIREN representing its gradient, while sharing the trainable parameters of the input network? One could obtain derivatives of any order this way and use them easily in training. Maybe I am missing something, though.
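For context, the "gradient is another SIREN" observation follows from the identity cos(u) = sin(u + π/2): for a single layer y = sin(Wx + b), the Jacobian is cos(Wx + b) scaled by W, i.e. the same layer with a phase-shifted bias. A minimal numerical check of that identity (a sketch in PyTorch; the variable names are mine, not from the repo):

```python
import torch

torch.manual_seed(0)

# Single SIREN layer: y = sin(W x + b).
W = torch.randn(4, 3)
b = torch.randn(4)
x = torch.randn(3, requires_grad=True)

y = torch.sin(x @ W.T + b)

# Jacobian dy/dx via autograd, one output component at a time.
jac_autograd = torch.stack([
    torch.autograd.grad(y[i], x, retain_graph=True)[0] for i in range(4)
])

# Closed form: cos(Wx + b) = sin(Wx + b + pi/2), so the Jacobian is
# a phase-shifted "SIREN" layer, scaled elementwise by the rows of W.
jac_closed = torch.sin(x @ W.T + b + torch.pi / 2).unsqueeze(1) * W

print(torch.allclose(jac_autograd, jac_closed, atol=1e-6))  # True
```

The two Jacobians agree, which is exactly why the derivative network can share the original network's weights.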
Autodiff produces a solution that is equivalent to the derivation in the supplement while being very easy to implement, which is why we used it for the paper.