GopiRajuMatta opened 1 week ago
Hi, I also noticed that you asked the same question in BeNeRF. As mentioned before, multi-view consistency of NeRF/3DGS can do this job.
Although all the images are blurry, they are captured at different locations and have different blur patterns, so together they provide enough information to recover the sharp 3D scene (each image compensates for the others' blur) during training, through our differentiable physical motion-blur imaging (rendering) model.
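The imaging model described above can be sketched in a few lines: a blurry image is the average of sharp images rendered at virtual poses sampled along the camera trajectory during the exposure. This is a minimal numpy sketch; `render_fn` and the toy shift-based "renderer" are illustrative stand-ins, not the repository's API:

```python
import numpy as np

def synthesize_blur(render_fn, poses):
    """Physical motion-blur formation: average the sharp images rendered
    at poses sampled along the in-exposure camera trajectory.

    render_fn(pose) -> (H, W, 3) float array  (hypothetical renderer)
    poses           -> iterable of camera poses (num_virtual_views of them)
    """
    sharp_views = [render_fn(p) for p in poses]
    return np.mean(sharp_views, axis=0)

# Toy demo: the "renderer" returns a shifted copy of a sharp image,
# mimicking linear camera motion; averaging the shifts yields motion blur.
sharp = np.zeros((8, 8, 3))
sharp[:, 3, :] = 1.0                                   # a sharp vertical line
blurry = synthesize_blur(lambda s: np.roll(sharp, s, axis=1), range(3))
# The line is now smeared over 3 columns with intensity 1/3 each.
```

Because this forward model is differentiable, the photometric loss against the input blurry image backpropagates into the sharp scene representation and the trajectory parameters.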
There could be some failure cases: for example, when the motion blur is severe and `num_virtual_views` is not large enough to synthesize the blur image. In this case, you may find some difference between the synthesized blur image and the input blur image after training. You can instead downscale the images to make training easier.

Thank you @LingzheZhao, very helpful!
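As a rough sanity check for this failure mode, one can measure the gap between the synthesized and input blur images (e.g. with PSNR), and downscale images by average pooling before training. A minimal numpy sketch, where `psnr` and `downscale2x` are illustrative helpers and not part of the released code:

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio between two images valued in [0, peak]."""
    mse = np.mean((a - b) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def downscale2x(img):
    """Downscale by 2x average pooling (odd dimensions are cropped first)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

# Toy check: a low PSNR between the synthesized and input blur images
# after training suggests num_virtual_views may have been too small.
rng = np.random.default_rng(0)
input_blur = rng.random((8, 8, 3))
synthesized = np.clip(input_blur + 0.05 * rng.standard_normal((8, 8, 3)), 0, 1)
print(f"PSNR: {psnr(synthesized, input_blur):.1f} dB")
```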
If time permits, I would like to meet online and discuss a few things with you; it would be really helpful! Please let me know.
Thank you Gopi
Hi @GopiRajuMatta
Sure, we can arrange an online meeting. https://github.com/WU-CVGL/BeNeRF/issues/4#issuecomment-2470192843
Hello @ethliup!
Thank you for the nice work, and many congratulations!
In this framework, what ensures that each rendered image is sharp, i.e., a deblurred version of the input? It could happen that the rendered images are themselves blurry, so that their average is still as blurry as the input.
Can you please clarify this?
Thank you Gopi