yenchenlin / nerf-pytorch

A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
MIT License

In which hidden layer is there a skip connection? #115

Closed Touka20 closed 1 year ago

Touka20 commented 1 year ago

The paper and the official code both concatenate the input to the fifth layer’s activation. But maybe this PyTorch version actually concatenates it to the sixth layer?

this version, where skip=[4]

self.pts_linears = nn.ModuleList(
            [nn.Linear(input_ch, W)] + [nn.Linear(W, W) if i not in self.skips else nn.Linear(W + input_ch, W) for i in range(D-1)])
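For reference, the list comprehension above can be instantiated and inspected directly. The values of D, W, and input_ch below are assumed for illustration (skips=[4] as in the snippet):

```python
import torch.nn as nn

# Sketch of the construction above with assumed values
# (D=8, W=256, input_ch=63 are illustrative; skips=[4] as in the repo snippet).
D, W, input_ch, skips = 8, 256, 63, [4]

pts_linears = nn.ModuleList(
    [nn.Linear(input_ch, W)]
    + [nn.Linear(W, W) if i not in skips else nn.Linear(W + input_ch, W)
       for i in range(D - 1)])

# Print each layer's input width: only index 5 (the sixth Linear layer)
# expects the concatenated W + input_ch features.
for i, layer in enumerate(pts_linears):
    print(i, layer.in_features)
```

So the widened layer sits at index 5, which is exactly what you'd expect if the concatenation happens after the layer at index 4.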

official code, where skip=[4]

for i in range(D):
    outputs = dense(W)(outputs)
    if i in skips:
        outputs = tf.concat([inputs_pts, outputs], -1)


Demonss3 commented 1 year ago

The i here is counted from 0, so when i=4 it refers to the fifth activation layer, which is consistent with the paper and the TensorFlow version.

Touka20 commented 1 year ago

> The i here is counted from 0, so when i=4, it refers to the fifth layer of activation layer, which is consistent with the version of the paper and tensorflow

Thank you, I got it! Actually I knew that i is counted from 0, but I confused some basic concepts. The input is concatenated to the fifth layer's output, not its input, so it becomes part of the sixth layer's input. I think it's time for me to review some basic but important deep learning concepts haha.
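The resolution can be seen in a short forward-pass sketch that mirrors the repo's loop structure (D, W, and input_ch are assumed illustrative values, not the repo's exact configuration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed illustrative values; skips=[4] as in the snippets above.
D, W, input_ch, skips = 8, 256, 63, [4]
pts_linears = nn.ModuleList(
    [nn.Linear(input_ch, W)]
    + [nn.Linear(W, W) if i not in skips else nn.Linear(W + input_ch, W)
       for i in range(D - 1)])

input_pts = torch.randn(2, input_ch)
h = input_pts
for i, layer in enumerate(pts_linears):
    h = F.relu(layer(h))
    if i in skips:
        # Concatenate to the OUTPUT of the layer at index 4 (the fifth layer),
        # so the widened tensor becomes the INPUT of the sixth layer.
        h = torch.cat([input_pts, h], -1)
        print("after layer", i, "shape:", tuple(h.shape))

print("final shape:", tuple(h.shape))
```

The concatenation fires after the fifth layer's activation, matching both the paper and the TensorFlow code; the only layer with a wider weight matrix is the sixth.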