minghanqin / LangSplat

Official implementation of the paper "LangSplat: 3D Language Gaussian Splatting" [CVPR2024 Highlight]
https://langsplat.github.io/

Different feature shape? #23

Closed dennisushi closed 8 months ago

dennisushi commented 8 months ago

```python
language_feature = torch.zeros((self._xyz.shape[0], 3), device="cuda")
```

from here

Why are the language features set to 3 dimensions? In the L1 loss computation we then have (3, H, W) predicted features against (512, H, W) precomputed GT features. Is this expected, or is it supposed to be a tunable parameter as in Table 7 of the paper?
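For reference, the mismatch described above is not just a conceptual one: PyTorch cannot broadcast a (3, H, W) tensor against a (512, H, W) tensor, so the L1 loss fails outright. A minimal sketch (dummy tensors, not the repo's actual rendering pipeline):

```python
import torch
import torch.nn.functional as F

# Dummy stand-ins for the rendered and precomputed feature maps.
pred = torch.zeros(3, 32, 32)    # rendered language feature: 3 channels
gt = torch.zeros(512, 32, 32)    # raw CLIP feature: 512 channels

try:
    F.l1_loss(pred, gt)
except RuntimeError as e:
    # Sizes 3 and 512 at dim 0 are not broadcastable, so this raises.
    print("shape mismatch:", e)
```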

dennisushi commented 8 months ago

My mistake was passing `-l language_features` instead of `-l language_features_dim3`, because the latter were not generated during autoencoder training. I didn't realize we have to run `python test.py --dataset_path ../$DATASET_PATH --dataset_name $DATASET_NAME` to compress the features down to 3 dimensions.
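To make the fix concrete: the autoencoder's test step maps each 512-d CLIP feature to a 3-d latent, and it is these latents that the Gaussians are supervised against. A hedged sketch of that idea (the layer sizes and names here are illustrative, not LangSplat's actual architecture):

```python
import torch
import torch.nn as nn

# Illustrative encoder/decoder pair compressing 512-d CLIP features to 3-d.
encoder = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 3))
decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 512))

feats_512 = torch.randn(1000, 512)   # per-pixel CLIP features
with torch.no_grad():
    feats_dim3 = encoder(feats_512)  # what gets saved as language_features_dim3
    recon = decoder(feats_dim3)      # decoded back to 512-d at query time

print(feats_dim3.shape)  # torch.Size([1000, 3])
```

The 3-d latents match the 3-channel features rasterized per Gaussian, which is why the training loss expects `language_features_dim3` rather than the raw 512-d features.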