In the paper, the activation is written as sin(\omega_0 W x + b).
But in the implementation (both the explore_siren notebook and modules.py), \omega_0 multiplies the entire output of the linear layer, i.e., sin(\omega_0 (W x + b)).
I find that this difference drastically changes the network's convergence behavior; the implemented version performs much better.
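For concreteness, here is a minimal PyTorch sketch of the two variants (the class names are my own, just for illustration; they are not from the repo):

```python
import torch
from torch import nn
import torch.nn.functional as F

class SineLayerPaper(nn.Module):
    """Paper formulation: sin(omega_0 * W x + b) -- omega_0 scales only W x."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.omega_0 = omega_0

    def forward(self, x):
        # Apply omega_0 to the weight term only, then add the unscaled bias.
        return torch.sin(self.omega_0 * F.linear(x, self.linear.weight)
                         + self.linear.bias)

class SineLayerImpl(nn.Module):
    """Implemented formulation: sin(omega_0 * (W x + b)) -- omega_0 scales
    the full pre-activation, bias included."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.omega_0 = omega_0

    def forward(self, x):
        # omega_0 multiplies the whole affine output, bias included.
        return torch.sin(self.omega_0 * self.linear(x))
```

The only difference is whether the bias is also scaled by \omega_0, but that changes both the range of phase offsets the layer can represent after initialization and the gradient magnitude on b by a factor of \omega_0, which plausibly explains the different convergence behavior.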
Could you clarify this issue?