WU-CVGL / BAD-Gaussians

[ECCV 2024] "BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting". ⚡Train a scene from real-world blurry images in minutes!
https://lingzhezhao.github.io/BAD-Gaussians/
Apache License 2.0

images issue #14

Closed: grisaiakaziki closed this issue 1 month ago

grisaiakaziki commented 2 months ago

Nice work!

I have a question about why some images have different camera poses after ns-train. For example, in the case of blurfactory, it seems that the camera poses of 0001_input.png, 0001_gt.png, and 0001_estimate.png are not the same. How can this be explained? Another question is, how can I obtain clear images under the camera pose that captured the blurry image?

LingzheZhao commented 2 months ago

Hi, thank you for your interest in our work!

Our pipeline takes inaccurate initial camera pose estimates from COLMAP as input, and uses this initialization to optimize a tiny camera trajectory for each camera, in order to model motion blur caused by camera motion.
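Conceptually (this is a simplified illustration, not the actual BAD-Gaussians code, which differentiates through the splatting renderer to optimize the trajectory), the blur model treats each captured image as the average of virtual sharp renders taken at poses sampled along the camera's exposure trajectory:

```python
import numpy as np

def synthesize_blur(render, poses):
    """Model a motion-blurred capture as the mean of virtual sharp renders
    at poses sampled along the exposure trajectory.
    `render(pose)` can be any renderer returning a float image."""
    frames = [render(p) for p in poses]
    return np.mean(frames, axis=0)

# Toy renderer: draws a bright 2x2 square, shifted horizontally by the
# (hypothetical, 1-D) pose parameter. Stands in for a real splatting render.
def toy_render(pose_x):
    img = np.zeros((8, 8))
    img[2:4, 2 + int(pose_x):4 + int(pose_x)] = 1.0
    return img

# Averaging three samples along a 2-pixel horizontal motion smears the square,
# which is exactly the blur pattern the optimizer tries to reproduce.
blurry = synthesize_blur(toy_render, poses=[0, 1, 2])
```

Optimizing the per-camera trajectory then amounts to adjusting the sampled poses until this synthesized blur matches the captured blurry input.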

You can find many output images in the `outputs/blurfactory/<yyyy-mm-dd_hhmmss>/<steps>` folders, saved every `steps_per_eval_all_images` steps. The images there correspond to different poses along each camera's optimized blur trajectory, which is why `0001_input.png`, `0001_gt.png`, and `0001_estimate.png` do not share a single camera pose.

For the second question: the blurred image is not captured at a single camera pose but along a camera trajectory. If the sharp renders at the start/middle/end of that trajectory suit your needs, you can find them directly there. If you want to sample more camera poses along the trajectory and save all the resulting images, you need to modify some code in the evaluation pipeline.
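One generic way to sample extra poses between the start and end of a trajectory (a sketch under the assumption of poses stored as quaternion + translation; the project's actual evaluation code interpolates its own optimized trajectory representation) is to lerp the translation and slerp the rotation:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                    # pick the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                 # nearly identical: plain lerp is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_pose(pose0, pose1, t):
    """Pose = (quaternion wxyz, translation xyz): slerp the rotation, lerp the translation."""
    q = slerp(pose0[0], pose1[0], t)
    trans = (1 - t) * np.asarray(pose0[1], float) + t * np.asarray(pose1[1], float)
    return q, trans

# Sample 5 poses between the (hypothetical) start and end of a blur trajectory:
# a small rotation about z plus a 0.1-unit translation along x.
start = ([1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0])
end = ([np.cos(np.pi / 8), 0.0, 0.0, np.sin(np.pi / 8)], [0.1, 0.0, 0.0])
samples = [interpolate_pose(start, end, t) for t in np.linspace(0.0, 1.0, 5)]
```

Each sampled pose can then be fed to the renderer to save a sharp image at that point of the exposure.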

grisaiakaziki commented 2 months ago

Why can't I reproduce the results from the paper? Which parameters did you use? Moreover, the rendered videos look quite different from those on the project homepage.

LingzheZhao commented 2 months ago

Can you provide the "completely different" renderings you mentioned so we can analyze them further? Hyperparameters will have some impact, but the default settings should work well enough on the Deblur-NeRF dataset.

grisaiakaziki commented 2 months ago

https://github.com/WU-CVGL/BAD-Gaussians/assets/80758857/f39e3026-a49d-4a77-a2c0-e3f35ee08daf

grisaiakaziki commented 2 months ago

Maybe this is the solution? https://github.com/WU-CVGL/BAD-Gaussians/commit/25155be35f3c2dbdb13af1fb439e90d0523c98ba

grisaiakaziki commented 2 months ago

I'm really sorry. Upon closer inspection, I realized that the dataset used in your project seems to be different from the one I'm using. I apologize for wasting your time, and thank you for patiently answering my questions.

LingzheZhao commented 2 months ago

From the video you've provided, it seems that the camera poses are being optimized (compared to the completely blurred results in https://github.com/WU-CVGL/BAD-Gaussians/issues/3#issuecomment-2016524355), but the scene, especially the background, is corrupted. I suspect this is because the camera intrinsics you are using are probably not the ground truth (as in our setup here) but were instead estimated by COLMAP from blurry images only. From our early experiments, 3D-GS, as an explicit representation, seems more sensitive to intrinsics than NeRFs. We didn't cover joint optimization of camera intrinsics in this work (in many areas such as robotics and machine vision, people usually calibrate their cameras first), but maybe it can be added to our TODO list!
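To get a feel for why miscalibrated intrinsics hurt, here is a back-of-the-envelope pinhole projection (a generic illustration with made-up numbers, not project code): a small focal-length error leaves points near the optical axis almost untouched but shifts reprojections by several pixels toward the image border, which is consistent with the background degrading first.

```python
import numpy as np

def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3D point in camera coordinates to pixel coordinates."""
    x, y, z = point_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

# Hypothetical GT focal length vs. a COLMAP-style estimate off by ~2%.
f_gt, f_est = 600.0, 612.0
cx = cy = 320.0

center = project([0.0, 0.0, 2.0], f_gt, f_gt, cx, cy)  # on the optical axis
border = project([1.0, 0.0, 2.0], f_gt, f_gt, cx, cy)  # toward the image edge

err_center = np.linalg.norm(project([0.0, 0.0, 2.0], f_est, f_est, cx, cy) - center)
err_border = np.linalg.norm(project([1.0, 0.0, 2.0], f_est, f_est, cx, cy) - border)
# err_center is 0 px while err_border is 6 px: every Gaussian placed using the
# wrong intrinsics lands off-target, and the offset grows away from the center.
```

An implicit NeRF can partially absorb such a consistent distortion in its learned field, whereas explicit Gaussians are anchored at fixed 3D positions, so the mismatch shows up directly as corrupted geometry.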