ChikaYan / d2nerf

Apache License 2.0
185 stars 14 forks

Artifacts forming on the border of the rendered images #5

Closed Alexdruso closed 1 year ago

Alexdruso commented 1 year ago

Hi there, I am running your code on some custom videos of mine, and for some reason, when I render images through the trained NeRF, I notice artifacts mainly (but not exclusively) concentrated at the borders. These artifacts appear as random pixels inconsistent with the color of the surrounding image, along with a general increase in blurriness. Let me know if you have encountered the same issues in your experiments and if there is a quick solution. One working theory is that they might be caused by imperfect camera poses estimated with COLMAP, but in that case I guess there would be no possible workaround.

ChikaYan commented 1 year ago

Hi, may I have a look at the video you are running and the decoupled results?

Error in the camera poses is likely the primary cause, but it could also be that the dynamic object does not move enough, so that part of the background is never visible. In that case the static NeRF can only make its best guess and may well produce blurry results with artifacts.

Alexdruso commented 1 year ago

Hi, unfortunately I cannot share the dataset. However, the scene I am dealing with has a fixed camera with objects moving in the foreground. I have hijacked the camera poses estimated by COLMAP and fixed them for all the frames. I will report back if that solves the issue, and thank you very much for your availability!
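For illustration, "fixing" the poses could look like the sketch below: every per-frame extrinsic is replaced with a copy of a single reference pose, emulating a truly static camera. The dict layout and field names here are assumptions, not the actual d2nerf/COLMAP camera format.

```python
import copy

def fix_poses(cameras):
    """Replace every per-frame camera with a copy of the first one,
    so orientation and position stop varying across frames.
    Intrinsics are untouched (for a single physical camera they are
    identical anyway)."""
    reference = cameras[0]
    return [copy.deepcopy(reference) for _ in cameras]

# hypothetical per-frame extrinsics with slight COLMAP jitter
cameras = [
    {"orientation": [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
     "position": [0.0, 0.0, float(i) * 0.01]}
    for i in range(4)
]

fixed = fix_poses(cameras)
assert all(c["position"] == fixed[0]["position"] for c in fixed)
```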

Since I actually have stereo pairs, may I ask whether assigning proper camera poses would be enough to make this project work with such a setup? It looks like all the other IDs (and hence embeddings) can stay pretty much the same for the two cameras at a given instant of time, but I might be wrong.

ChikaYan commented 1 year ago

> Hi, unfortunately I cannot share the dataset. However, the scene I am dealing with has a fixed camera with objects moving in the foreground. I have hijacked the camera poses estimated by COLMAP and fixed them for all the frames. I will report back if that solves the issue, and thank you very much for your availability!

Np! Although if you only have a single fixed camera, I don't think COLMAP will be able to give a sensible estimate of the camera parameters. Moreover, it would make little sense to use our method, which is based on NeRF and requires multi-view training. You may want to look into 2D motion segmentation methods such as Omnimatte in this case.

> Since I actually have stereo pairs, may I ask whether assigning proper camera poses would be enough to make this project work with such a setup? It looks like all the other IDs (and hence embeddings) can stay pretty much the same for the two cameras at a given instant of time, but I might be wrong.

Yes, the method should work with more than one camera. Note that aside from properly allocating the time ids, you may also want to assign distinct camera ids and use the camera embedding in the dynamic NeRF component (it was originally supported by HyperNeRF; you may need to change the code base a little to enable it) to boost performance.
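A minimal sketch of how per-frame ids might be allocated for a stereo rig. The idea follows the HyperNeRF-style metadata convention (frames captured at the same instant share a time/warp id, each physical camera gets its own camera id), but the helper name, frame naming scheme, and exact keys are assumptions:

```python
def build_stereo_metadata(num_timesteps, num_cameras=2):
    """Assign per-frame ids for a stereo (or multi-view) rig.

    Frames captured at the same instant share a time id, so the dynamic
    NeRF treats them as one deformation state, while each physical
    camera gets a distinct camera id for a per-camera embedding.
    """
    metadata = {}
    for t in range(num_timesteps):
        for cam in range(num_cameras):
            frame_name = f"{cam}_{t:06d}"  # hypothetical naming scheme
            metadata[frame_name] = {
                "time_id": t,      # shared across the stereo pair
                "warp_id": t,      # deformation keyed to time, not camera
                "camera_id": cam,  # distinct per physical camera
            }
    return metadata

meta = build_stereo_metadata(num_timesteps=3)
# left/right frames at t=0 share the time id but differ in camera id
assert meta["0_000000"]["time_id"] == meta["1_000000"]["time_id"]
assert meta["0_000000"]["camera_id"] != meta["1_000000"]["camera_id"]
```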