Drexubery / ViewCrafter

Official implementation of "ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis"

Quantitative evaluation of Sparse NVS task #32

Open zhanghaoyu816 opened 3 weeks ago

zhanghaoyu816 commented 3 weeks ago

Thanks for your wonderful project!

I'm glad to see that you have just open-sourced the code for the sparse NVS task, but I'm curious whether you could also provide the quantitative evaluation code, i.e., for computing PSNR, SSIM, and LPIPS of the rendered images on specific test views (as in Table 2 of the paper). Should I align the DUSt3R point cloud with the known camera poses of the training views in the sparse NVS task?
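For reference, a minimal sketch of what such a per-view evaluation could look like (this is not the official ViewCrafter script; the directory layout and file naming are assumptions), using scikit-image for PSNR/SSIM and the `lpips` package for LPIPS:

```python
# Minimal PSNR / SSIM / LPIPS evaluation sketch (not the official ViewCrafter
# script). Assumes rendered test views and ground-truth images are saved as
# matching, sorted PNG files of the same resolution.
import glob
import numpy as np
import torch
import lpips                                   # pip install lpips
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def load_image(path):
    # Load as float32 RGB in [0, 1]
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

def to_lpips_tensor(img):
    # LPIPS expects NCHW tensors scaled to [-1, 1]
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0) * 2.0 - 1.0

pred_paths = sorted(glob.glob("renders/test/*.png"))   # hypothetical layout
gt_paths = sorted(glob.glob("gt/test/*.png"))

lpips_fn = lpips.LPIPS(net="alex")
psnrs, ssims, lpipss = [], [], []
for p_path, g_path in zip(pred_paths, gt_paths):
    pred, gt = load_image(p_path), load_image(g_path)
    psnrs.append(peak_signal_noise_ratio(gt, pred, data_range=1.0))
    ssims.append(structural_similarity(gt, pred, channel_axis=-1, data_range=1.0))
    with torch.no_grad():
        lpipss.append(lpips_fn(to_lpips_tensor(pred), to_lpips_tensor(gt)).item())

print(f"PSNR {np.mean(psnrs):.2f}  SSIM {np.mean(ssims):.4f}  LPIPS {np.mean(lpipss):.4f}")
```

On the alignment question, one common (but unofficial) way to register DUSt3R's estimated poses to known training-view poses is a similarity (Umeyama) fit on the camera centers; a sketch, assuming at least three non-collinear training cameras:

```python
# Umeyama similarity alignment sketch: fits scale s, rotation R, translation t
# so that s * R @ src + t ≈ dst. Here src could be the DUSt3R camera centers
# and dst the known training-view camera centers (both of shape (N, 3), N >= 3).
import numpy as np

def umeyama(src, dst):
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / src.shape[0]
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                     # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (src_c ** 2).sum(1).mean()
    t = mu_dst - s * R @ mu_src
    return s, R, t

# The same (s, R, t) could then be applied to the DUSt3R point cloud before
# 3DGS optimization, so renders are expressed in the known camera frame.
```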

Drexubery commented 2 weeks ago

Thanks for your interest in our work!

We will release the 3DGS optimization and the corresponding evaluation code after the CVPR deadline.

fafancier commented 5 days ago

@Drexubery Hi, I tried using the images from the diffusion video to train 3DGS in both single-view and sparse-view input modes. I found that this naive approach produces blurry results and floating artifacts. Could you show what a better 3DGS result trained from a ViewCrafter video should look like? Thanks!