ShijieZhou-UCLA / feature-3dgs

[CVPR 2024 Highlight] Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields

Bug: collected_semantic_feature is not initialized #5

Closed LoickCh closed 6 months ago

LoickCh commented 6 months ago

Hey, thank you for your great work. In rasterizer_impl.cu, memory is allocated for collected_semantic_feature:

    float *collected_semantic_feature;
    cudaMalloc((void **)&collected_semantic_feature, NUM_SEMANTIC_CHANNELS * BLOCK_SIZE * sizeof(float));

Then the value is read directly in renderCUDA of backward.cu with:

const float f = collected_semantic_feature[ch * BLOCK_SIZE + j];

I assume you wanted to mimic the behaviour of collected_colors, and that because of the constraints on declaring shared memory, you did not declare collected_semantic_feature in renderCUDA as:

__shared__ float collected_colors[C * BLOCK_SIZE];

But I think you forgot to write the collected features into this buffer, as is done for the collected colors:

for (int i = 0; i < C; i++)
     collected_colors[i * BLOCK_SIZE + block.thread_rank()] = colors[coll_id * C + i];
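If that was indeed the intent, I imagine the missing write would mirror the color staging, something like this (hypothetical sketch following the repo's naming, not actual code from the repo):

```
// Hypothetical: stage this iteration's semantic features into the per-block
// buffer, mirroring what renderCUDA does for collected_colors above.
for (int ch = 0; ch < NUM_SEMANTIC_CHANNELS; ch++)
    collected_semantic_feature[ch * BLOCK_SIZE + block.thread_rank()] =
        semantic_feature[coll_id * NUM_SEMANTIC_CHANNELS + ch];
```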

Am I right?

wb014 commented 6 months ago

I have the same question

yihua7 commented 6 months ago

It's not a bug. In fact, it doesn't matter even if collected_semantic_feature is dropped here.

The proposed method mainly focuses on distilling features into the 3D GS rather than training the 3D GS with a feature rendering loss. Therefore, there is no need to propagate the gradient of the feature loss to the alpha attributes of the Gaussians, which are optimized by the RGB rendering loss. The only potential use of collected_semantic_feature would be to record the feature values and backpropagate gradients to alpha, but this may damage the reconstruction of the 3D GS.

LoickCh commented 6 months ago

OK, thank you for your answer. I just wanted to be sure it was done on purpose. Removing collected_semantic_feature might help clarify the code.

ShijieZhou-UCLA commented 5 months ago

Yes, we don't allocate shared memory for semantic_feature because it could cause an OOM if the dimension is too high. Instead, we allocate global memory for it, which is why the implementation differs from that of the other variables.