Closed — LoickCh closed this issue 6 months ago
I have the same question
It's not a bug. It actually doesn't matter even if collected_semantic_feature is dropped here.
The proposed method mainly focuses on distilling features into the 3D GS rather than training the 3D GS with a feature-rendering loss. Therefore, there is no need to apply the gradient of the feature loss to the alpha attributes of the Gaussians, which are optimized by the RGB rendering loss. The only potential use of collected_semantic_feature would be to record the feature values and backpropagate gradients to alpha, but this may damage the reconstruction of the 3D GS.
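To make the stop-gradient concrete, here is a sketch of how a per-pixel backward loop can accumulate the feature-loss gradient for the feature attribute while deliberately skipping its contribution to alpha. All names (dL_dfeatures, dL_dpixel_feature, NUM_SEMANTIC_CHANNELS, etc.) are illustrative assumptions, not the repository's actual code:

```cuda
// Illustrative fragment of a tile-based rasterizer backward pass.
// T is the accumulated transmittance, alpha the Gaussian's opacity
// contribution at this pixel, gid the global Gaussian id.

// RGB term: contributes to both dL/dcolor and dL/dalpha.
for (int ch = 0; ch < 3; ch++) {
    atomicAdd(&dL_dcolors[gid * 3 + ch], alpha * T * dL_dpixel[ch]);
    dL_dalpha += (c[ch] - accum_rec[ch]) * dL_dpixel[ch];
}

// Feature term: contributes ONLY to dL/dfeature. The alpha
// contribution is intentionally omitted, so the feature loss
// cannot perturb the opacity/geometry optimized by the RGB loss.
for (int f = 0; f < NUM_SEMANTIC_CHANNELS; f++) {
    atomicAdd(&dL_dfeatures[gid * NUM_SEMANTIC_CHANNELS + f],
              alpha * T * dL_dpixel_feature[f]);
    // no "dL_dalpha += ..." here (stop-gradient w.r.t. alpha)
}
```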
Ok, thank you for your answer. I just wanted to be sure it was done on purpose. Removing collected_semantic_feature might help clarify the code.
Yes, we don't allocate shared memory for semantic_feature, as it may cause an OOM when the feature dimension is too high. Instead, we allocate global memory for it, which is why the implementation differs from that of the other variables.
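The distinction can be sketched as follows. Per-block shared memory is small and fixed at compile time, so only low-dimensional per-Gaussian data fits there; a D-dimensional feature is placed in a global-memory buffer instead. This is an illustrative sketch in the style of the 3DGS rasterizer's obtain() chunk allocator, not the repository's exact code:

```cuda
// Low-dimensional data: staged per tile in shared memory.
// Needs 3 * BLOCK_SIZE floats — well within the per-block limit.
__shared__ float collected_colors[3 * BLOCK_SIZE];

// A D-dimensional semantic feature would need D * BLOCK_SIZE floats
// of shared memory per block; for large D (e.g. hundreds of channels)
// this exceeds the hardware limit. So the features are instead kept
// in a global-memory buffer carved out of a preallocated chunk:
float* collected_semantic_feature;  // global memory, size D * count
obtain(chunk, collected_semantic_feature, D * num_rendered, 128);
```

In the backward kernel the feature is then read directly from this global buffer by Gaussian id, rather than being staged into shared memory each tile iteration.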
Hey, thank you for your great work. In rasterizer_impl.cu, memory is allocated for collected_semantic_feature. The value is then read directly in the renderCUDA pass of backward.cu with
I assume you wanted to mimic the behaviour of collected_colors, and that, given the constraints on defining shared memory, you did not initialize collected_semantic_feature in renderCUDA as:

But I think you forgot to write the collected features, as is done for the collected colors:

Am I right?
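For reference, the shared-memory staging pattern in question looks roughly like the per-tile fetch loop of the original 3DGS backward renderCUDA, where collected_colors is written before being consumed. The fragment below is a paraphrase with illustrative names, not a verbatim copy; the point is that no analogous write exists for the semantic feature, since it lives in global memory and is indexed directly by Gaussian id:

```cuda
// Per-tile fetch step inside the backward render loop (sketch).
block.sync();
if (range.x + progress < range.y) {
    const int coll_id = point_list[range.y - progress - 1];
    collected_id[block.thread_rank()] = coll_id;
    // Colors ARE staged into shared memory here:
    for (int ch = 0; ch < C; ch++)
        collected_colors[ch * BLOCK_SIZE + block.thread_rank()] =
            colors[coll_id * C + ch];
    // No equivalent write for collected_semantic_feature: the
    // feature is later read straight from its global buffer as
    // semantic_feature[coll_id * D + f], so staging is unnecessary.
}
block.sync();
```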