vsitzmann / siren

Official implementation of "Implicit Neural Representations with Periodic Activation Functions"
MIT License

Inconsistency between written formula and implementation #55

Open hieu325 opened 2 years ago

hieu325 commented 2 years ago

In the paper, the activation is written as sin(\omega_0 W x + b).

But in the implementation (both the explore_siren notebook and modules.py), the output of the linear layer is multiplied by \omega_0, i.e. sin(\omega_0 (W x + b)).
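A minimal sketch of the two variants for comparison, assuming a standard torch.nn.Linear layer; the class names and defaults here are illustrative, not the repo's exact code:

```python
import torch
from torch import nn


class SineLayerPaperForm(nn.Module):
    """Variant as written in the paper: sin(omega_0 * W x + b)."""

    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        # Only the weight product is scaled; the bias is added unscaled.
        return torch.sin(self.omega_0 * (x @ self.linear.weight.T) + self.linear.bias)


class SineLayerCodeForm(nn.Module):
    """Variant as implemented: sin(omega_0 * (W x + b))."""

    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        # The whole affine output, bias included, is scaled by omega_0.
        return torch.sin(self.omega_0 * self.linear(x))
```

Note that the two forms differ only by a rescaling of the bias (b vs. \omega_0 b), so they span the same function class; however, with the same bias initialization, the effective bias magnitude differs by a factor of \omega_0, which may be what changes the convergence behavior.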

I find that this difference drastically changes the network's convergence behavior; the network as actually implemented performs much better.

Could you clarify this issue?