vsitzmann / siren

Official implementation of "Implicit Neural Representations with Periodic Activation Functions"
MIT License

Implementation of w0 #57

Closed bell-one closed 1 year ago

bell-one commented 1 year ago

Hello @vsitzmann,

Thanks for your nice paper and implementation. I'm currently trying to use the sine activation function in some implicit image restoration tasks,

and I have a small question about the w0 implementation.

  1. Multiplication by w0

In the paper, w0 = 30 in the initial layer is expressed as y = sin(w0*Wx + b) in the last sentence of paragraph 3.2. But in the implementation,

MetaSequential(BatchLinear(in_features, hidden_features), nl): BatchLinear(in_features, hidden_features) computes (Wx + b), and nl then yields y = sin(w0*(Wx + b)).
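
For concreteness, here is a minimal sketch of that composition, using a plain nn.Linear in place of BatchLinear (the w0 constructor argument is my own parameterization for illustration; the repo hardcodes the factor 30 inside the nonlinearity):

```python
import torch
from torch import nn

class Sine(nn.Module):
    """Sine nonlinearity with w0 folded in (hypothetical parameterization;
    the repo hardcodes the factor 30 inside the activation)."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0

    def forward(self, x):
        # Because this runs after the linear layer, the composed block
        # computes sin(w0 * (W x + b)), not the paper's sin(w0 * W x + b).
        return torch.sin(self.w0 * x)

# Analogue of MetaSequential(BatchLinear(in_features, hidden_features), nl):
layer = nn.Sequential(nn.Linear(2, 256), Sine(w0=30.0))
```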

I see that performance is good in your released experiments, but is there any problem with the w0 multiplication also hitting the bias? Does it just mean the learned bias ends up correspondingly smaller?
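
As a quick sanity check of the "smaller bias" reading: the two forms agree exactly once the bias is reparameterized by w0, so a learnable bias can absorb the difference (a toy check of my own, not code from the repo):

```python
import torch

w0 = 30.0
W = torch.randn(4, 2)
b = torch.randn(4)
x = torch.randn(2)

# Implementation form: w0 applied after the full affine map.
inside = torch.sin(w0 * (W @ x + b))
# Paper form, with the bias reparameterized as b' = w0 * b.
outside = torch.sin(w0 * (W @ x) + w0 * b)

assert torch.allclose(inside, outside)
```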

  2. Initialization of W

In the paper, paragraph 3.2 and supplement section 1.5:

W should be initialized uniformly in [-sqrt(6/n), sqrt(6/n)] according to paragraph 3.2, or in [-sqrt(6/n)/w0, sqrt(6/n)/w0] when it is combined with a w0 value.

But in the released source, the first layer uses w0 = 30 while its weights are initialized in [-1/n, 1/n]. Where can I find the reasoning for this first-layer initialization?
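
For reference, my understanding of the two initializers in question, paraphrased from the repo's modules.py (exact names and code may differ slightly):

```python
import numpy as np
import torch

def sine_init(m):
    # Hidden layers: U(-sqrt(6/n)/w0, sqrt(6/n)/w0) with w0 = 30,
    # matching paragraph 3.2 / supplement 1.5.
    with torch.no_grad():
        if hasattr(m, 'weight'):
            n = m.weight.size(-1)
            m.weight.uniform_(-np.sqrt(6 / n) / 30, np.sqrt(6 / n) / 30)

def first_layer_sine_init(m):
    # First layer: U(-1/n, 1/n), which is the part my question is about.
    with torch.no_grad():
        if hasattr(m, 'weight'):
            n = m.weight.size(-1)
            m.weight.uniform_(-1 / n, 1 / n)

# Typical usage: net.apply(sine_init); net.net[0].apply(first_layer_sine_init)
```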

bell-one commented 1 year ago

Duplicate of #55 and #43