vsitzmann / siren

Official implementation of "Implicit Neural Representations with Periodic Activation Functions"
MIT License

Computing gradient without autograd #13

Closed jankrepl closed 4 years ago

jankrepl commented 4 years ago

First of all, thanks for the amazing paper and for sharing the code.

I have a question regarding the computation of gradients (and higher-order derivatives) with respect to the input. In the code you seem to use autograd to compute them (diff_operators.py). However, in section 2 of the paper's supplementary material you give an explicit formula (another SIREN) that represents the gradient.
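For reference, this is the kind of autograd pattern I mean — a minimal sketch, not necessarily the exact code in diff_operators.py, just the usual `torch.autograd.grad` call with `create_graph=True` so higher-order derivatives remain differentiable:

```python
import torch

def input_gradient(y, x):
    """dy/dx with the graph kept, so further derivatives (e.g. a Laplacian) can be taken."""
    return torch.autograd.grad(
        y, x, grad_outputs=torch.ones_like(y), create_graph=True
    )[0]

# usage: x must require gradients before the forward pass
# x = coords.clone().detach().requires_grad_(True)
# y = model(x)
# grad = input_gradient(y, x)
```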

Did you consider implementing a general mechanism that takes a SIREN as input and returns a new SIREN representing its gradient, sharing the trainable parameters of the original network? One could obtain derivatives of any order this way and use them directly in training. Maybe I am missing something, though.
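To make the question concrete, here is a rough sketch of what such a shared-parameter "gradient network" could look like, assuming a plain SIREN with hidden layers y = sin(omega_0 (Wx + b)) and a linear last layer; the class and method names are only illustrative and not from this repo (initialization omitted). Each layer's Jacobian is omega_0 diag(cos(omega_0 (Wx + b))) W, and since cos(u) = sin(u + π/2) it is again a phase-shifted sine layer reusing the same W and b:

```python
import torch
import torch.nn as nn

class Siren(nn.Module):
    """Plain SIREN: hidden layers y = sin(omega_0 * (W x + b)), linear output layer."""
    def __init__(self, in_features, hidden_features, out_features,
                 n_hidden=3, omega_0=30.0):
        super().__init__()
        dims = [in_features] + [hidden_features] * n_hidden
        self.hidden = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(n_hidden)])
        self.last = nn.Linear(hidden_features, out_features)
        self.omega_0 = omega_0

    def forward(self, x):
        for layer in self.hidden:
            x = torch.sin(self.omega_0 * layer(x))
        return self.last(x)

    def analytic_jacobian(self, x):
        """d(output)/d(input), built from the same weights (no extra parameters)."""
        jac = torch.eye(x.shape[-1], device=x.device, dtype=x.dtype)
        for layer in self.hidden:
            pre = self.omega_0 * layer(x)                          # (batch, out)
            # layer Jacobian: omega_0 * diag(cos(pre)) @ W, i.e. a phase-shifted sine layer
            layer_jac = self.omega_0 * torch.cos(pre).unsqueeze(-1) * layer.weight
            jac = torch.matmul(layer_jac, jac)                     # (batch, out, in_features)
            x = torch.sin(pre)
        return torch.matmul(self.last.weight, jac)                 # (batch, out_features, in_features)
```

Applying the same construction to `analytic_jacobian` itself would give second derivatives, and so on for higher orders, always with the original parameters.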

Thanks!

jnpmartel commented 4 years ago

Autodiff produces a solution that is equivalent to the derivation in the supplement while being very easy to implement, which is why we used it for the paper.
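For completeness, a small self-contained check (illustrative only, not from the repo) that autograd agrees with the closed-form derivative for a single sine layer sin(omega_0 (Wx + b)):

```python
import torch

omega_0 = 30.0
lin = torch.nn.Linear(2, 3)
x = torch.randn(5, 2, requires_grad=True)
y = torch.sin(omega_0 * lin(x))

# autograd: gradient of sum_j y_j w.r.t. x
auto = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y))[0]
# closed form: sum_j omega_0 * cos(omega_0 (Wx + b))_j * W_j
analytic = (omega_0 * torch.cos(omega_0 * lin(x))) @ lin.weight

print(torch.allclose(auto, analytic, atol=1e-4))  # True
```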