vsitzmann / siren

Official implementation of "Implicit Neural Representations with Periodic Activation Functions"

nn.Embedding layer initialization #63

Open ivanstepanovftw opened 6 months ago

ivanstepanovftw commented 6 months ago

I have a question regarding the nn.Embedding layer. It mimics F.one_hot followed by nn.Linear(..., bias=False), but is implemented as a lookup table, which makes it an order of magnitude faster than the explicit one-hot + linear combination.
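For concreteness, here is a minimal sketch of the equivalence I mean (the shapes and names are just for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_embeddings, embedding_dim = 10, 4
embedding = nn.Embedding(num_embeddings, embedding_dim)

idx = torch.tensor([3, 7])

# Lookup-table path: selects rows of the weight matrix directly.
out_lookup = embedding(idx)

# Equivalent one-hot + linear path (nn.Embedding's weight is the
# transpose of the corresponding nn.Linear weight).
one_hot = F.one_hot(idx, num_classes=num_embeddings).float()
out_matmul = one_hot @ embedding.weight

assert torch.allclose(out_lookup, out_matmul)
```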

By default, nn.Embedding weights are initialized from a normal distribution (mean 0, std 1). However, the first layer of SIREN expects input uniformly distributed on the interval [-1, 1].
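A quick check of the default initialization, e.g.:

```python
import torch.nn as nn

embedding = nn.Embedding(4096, 2)
print(embedding.weight.mean().item())       # ~0.0
print(embedding.weight.std().item())        # ~1.0, i.e. N(0, 1)
print(embedding.weight.abs().max().item())  # typically > 3, well outside [-1, 1]
```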

  1. Should I initialize nn.Embedding with embedding.weight.uniform_(-1, 1) to match SIREN's expectation for the input distribution?
  2. Can I use nn.Embedding as the first layer of SIREN, initialized as proposed in the paper, embedding.weight.uniform_(-1 / in_features, 1 / in_features), so as to eliminate two consecutive linear layers with no non-linearity in between? (See the sketch below.)
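Here is roughly the setup I have in mind for both questions (a sketch, not a reference implementation; omega_0 = 30 is the paper's default first-layer frequency, and what counts as in_features in option 2 is part of what I'm asking):

```python
import torch
import torch.nn as nn

num_codes, hidden = 4096, 256
omega_0 = 30.0  # first-layer frequency from the paper

# Option 1 (question 1): embedding re-initialized to mimic [-1, 1]
# coordinates, feeding an ordinary SIREN first layer.
embedding = nn.Embedding(num_codes, 2)
first_linear = nn.Linear(2, hidden)
with torch.no_grad():
    embedding.weight.uniform_(-1, 1)
    first_linear.weight.uniform_(-1 / 2, 1 / 2)  # SIREN first-layer init, fan-in = 2

idx = torch.randint(0, num_codes, (16,))
h1 = torch.sin(omega_0 * first_linear(embedding(idx)))

# Option 2 (question 2): fuse the embedding and the first linear layer
# into one lookup table, initialized with the SIREN first-layer scheme.
# Whether in_features should be the one-hot width num_codes or the
# would-be coordinate dimension is exactly what I'm unsure about.
in_features = num_codes  # assumption: fan-in of the fused linear map
fused = nn.Embedding(num_codes, hidden)
with torch.no_grad():
    fused.weight.uniform_(-1 / in_features, 1 / in_features)

h1_fused = torch.sin(omega_0 * fused(idx))
```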