city-super / Scaffold-GS

[CVPR 2024 Highlight] Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering
https://city-super.github.io/scaffold-gs

How did you configure your test set? #30

Open Lee-JaeWon opened 5 months ago

Lee-JaeWon commented 5 months ago

Thanks for a great paper.

I was wondering how you evaluated the metrics such as PSNR and SSIM reported in your paper.

Specifically, how many test views did you hold out from each dataset when computing these numbers?

I ask because it doesn't seem to be directly mentioned in the paper.

inspirelt commented 5 months ago

Thanks. For datasets without an official test split, we follow the common configuration: select 1 frame out of every 8 as the test set. For BungeeNeRF, we choose the first 30 frames as the test set. Details are in https://github.com/city-super/Scaffold-GS/blob/da97ef8257b46d51c432df0df8b62f7c3a3c1079/scene/dataset_readers.py#L165-L178.
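
For reference, a minimal sketch of that every-8th-frame convention (the exact logic is in the `dataset_readers.py` lines linked above; the function and variable names here are illustrative, not the repo's code):

```python
def split_train_test(cam_infos, eval=True, llffhold=8):
    """Hold out every llffhold-th camera as a test view (common LLFF-style split)."""
    if not eval:
        return list(cam_infos), []
    train_cams = [c for idx, c in enumerate(cam_infos) if idx % llffhold != 0]
    test_cams = [c for idx, c in enumerate(cam_infos) if idx % llffhold == 0]
    return train_cams, test_cams

# BungeeNeRF-style alternative described above: the first 30 frames form the test set.
# test_cams, train_cams = cam_infos[:30], cam_infos[30:]
```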

Torment123 commented 5 months ago

Hi, I have a follow-up question on this: I see that the appearance embedding is constructed based on the number of views in the train cameras, and when switching to eval mode, the uid of the test camera is used directly to query the learned embedding.

If I understand the appearance embedding correctly, it is set up so that view-dependent effects can be better encoded. But since the test cameras and train cameras cover different views, their uids carry different meanings in this respect, so I think querying the same learned embedding would produce the wrong effect? Thanks
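
A minimal sketch of the pattern I mean, assuming a per-camera `nn.Embedding` indexed by camera uid (names and sizes are hypothetical, not the repo's exact code):

```python
import torch
import torch.nn as nn

# One embedding row per training camera, queried by camera uid.
num_train_cams = 100   # assumed number of training views
appearance_dim = 32    # assumed embedding width
embedding_appearance = nn.Embedding(num_train_cams, appearance_dim)

def query_appearance(camera_uid: int) -> torch.Tensor:
    # At eval time this uid belongs to a test camera, so it retrieves a row
    # that was optimized for a different (training) view -- the mismatch
    # described above.
    return embedding_appearance(torch.tensor([camera_uid]))
```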