yuehaowang / RecolorNeRF

RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color Editing of 3D Scenes
https://sites.google.com/view/recolornerf

Confusion about palette output dimensions #8

Open BCL123456-BAL opened 1 year ago

BCL123456-BAL commented 1 year ago

In `models/palette_tensoRF.py`:

```python
class PLTRender(torch.nn.Module):
    def __init__(self):
        self.n_dim = 3 + len_palette
        ......
        layer1 = torch.nn.Linear(self.in_mlpC, featureC)
        layer2 = torch.nn.Linear(featureC, featureC)
        layer3 = torch.nn.Linear(featureC, len_palette - 1)
        torch.nn.init.constant_(layer3.bias, 0)
        self.mlp = torch.nn.Sequential(
            layer1, torch.nn.LeakyReLU(inplace=True),
            layer2, torch.nn.LeakyReLU(inplace=True),
            layer3)
        self.n_dim += 1
```

I recently tried to integrate this work of yours into instant-ngp, but I don't understand the role of `self.n_dim`, or why the output dimension of `layer3` is `len_palette - 1`. Also, why is the activation function after each layer LeakyReLU, while TensoRF uses ReLU?

Hope you can help me.

Best wishes!

yuehaowang commented 1 year ago

Thanks for your question.

  1. n_dim is the total number of output channels, including color, palette weights, sparsity, etc.
  2. Since the weight of the last palette color is computed from other weights through alpha blending, the MLP layers only need to output len_palette - 1 channels.
  3. I think both LeakyReLU and ReLU could work.
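To make point 2 concrete, here is a minimal sketch of how `len_palette - 1` raw MLP outputs can be turned into `len_palette` weights via alpha blending. The function name, the sigmoid activation, and the exact compositing order are assumptions for illustration, not the repository's actual implementation; the point is only that the last weight is the leftover transmittance, so it needs no output channel of its own.

```python
import torch

def alpha_blend_weights(logits: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: map (P - 1) MLP outputs to P palette weights.

    Each opacity a_i = sigmoid(logit_i) claims a fraction of the remaining
    transmittance; the final palette color receives whatever is left, so the
    P weights always sum to 1 without a dedicated output channel.
    """
    a = torch.sigmoid(logits)                   # (..., P-1) opacities in (0, 1)
    trans = torch.cumprod(1.0 - a, dim=-1)      # transmittance after each layer
    # Transmittance *before* each layer: 1 for the first, shifted cumprod after
    trans_before = torch.cat(
        [torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    weights = a * trans_before                  # (..., P-1) blended weights
    last = trans[..., -1:]                      # leftover for the last color
    return torch.cat([weights, last], dim=-1)   # (..., P), rows sum to 1

w = alpha_blend_weights(torch.randn(4, 3))     # 3 logits -> 4 palette weights
assert torch.allclose(w.sum(dim=-1), torch.ones(4), atol=1e-6)
```

The sum telescopes: `a1 + (1-a1)a2 + (1-a1)(1-a2)a3 + (1-a1)(1-a2)(1-a3) = 1`, which is why only `len_palette - 1` channels are needed.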