I'm currently working on reconstructing the sedan scene from the Ref-NeRF project's real captured dataset. My goal is to use the SegFormer architecture to segment the car in the images and reconstruct only the car, excluding the rest of the scene.
I have experimented with setting the background as white, similar to synthetic datasets, as well as black. Interestingly, I have observed that using a white background results in significant noise, whereas a black background produces much better results.
I'm curious to understand the reason behind this discrepancy. Why does the white background work well with synthetic data but not with real datasets? Is there an explanation for this behavior?
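For context, the masking step I'm describing is just alpha compositing the segmented car onto a constant background color before training. A minimal sketch of what I'm doing (function and variable names are my own, not from the Ref-NeRF codebase):

```python
import numpy as np

def composite_background(rgb, mask, bg_color):
    """Composite a segmented object onto a constant background.

    rgb:      (H, W, 3) float image in [0, 1]
    mask:     (H, W) float matte in [0, 1], where 1 = object (car)
    bg_color: scalar or length-3 background color in [0, 1]
    """
    alpha = mask[..., None]                      # (H, W, 1) for broadcasting
    bg = np.asarray(bg_color, dtype=rgb.dtype)   # constant background color
    return alpha * rgb + (1.0 - alpha) * bg

# Toy example: 2x2 gray image; top row belongs to the object.
rgb = np.full((2, 2, 3), 0.5)
mask = np.array([[1.0, 1.0],
                 [0.0, 0.0]])

white_bg = composite_background(rgb, mask, 1.0)  # background pixels -> 1.0
black_bg = composite_background(rgb, mask, 0.0)  # background pixels -> 0.0
```

With a white background I also set the renderer's background color to white (and to black in the black case), so the comparison is only about the choice of constant color.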