vye16 / shape-of-motion

MIT License
741 stars 47 forks source link

Test in Nvidia-Dynamic Scenes #33

Open kcheng1021 opened 1 month ago

kcheng1021 commented 1 month ago

Hi, thanks for your wonderful work and for releasing the code! I have tried it on the Nvidia dataset in the monocular setting, following DynIBar. However, the results appear blurry and noisy. Here is a sample image.

[sample image attachment]

I trained on the Nvidia dataset using the custom-dataset pipeline without any changes. I am curious why the static background is fuzzy, since your demos are quite clear. Can you help me figure out the reason? Thanks very much!

qianqianwang68 commented 2 weeks ago

Hi, if the background is blurry, the most likely reason is the camera parameters, since the background is modeled with standard 3DGS. One way to verify this is to use the GT camera poses from the Nvidia dataset and see if the issue persists. Indeed, we found our camera estimation to be one of the most limiting factors of the whole pipeline. We hope to improve camera estimation for in-the-wild videos in the future.

CTouch commented 6 days ago

> One way to verify this is to use the GT camera poses in the Nvidia datasets and see if the issue persists.

Thanks for your great work! @qianqianwang68 I have the same question. I also found that dynamic objects (like the balloon) move jerkily.

https://github.com/user-attachments/assets/81b84453-1851-4f9c-9b53-44cf061e8b56

I tried simply replacing the traj_c2w in droid_recon/Balloon1.npy with the GT camera poses, but that appears to be the wrong approach. Could you please explain how to use the GT camera poses?
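One plausible reason the naive swap fails is that DROID-SLAM poses (and the depths reconstructed alongside them) live in an arbitrary-scale coordinate frame, so GT poses pasted in directly are inconsistent with the rest of the recon. A minimal sketch of aligning the GT trajectory into the recon's frame with a Sim(3)/Umeyama fit on the camera centers, assuming `traj_c2w` holds (N, 4, 4) camera-to-world matrices and the GT poses use the same convention (the helper names and file layout here are illustrative assumptions, not the repo's API):

```python
import numpy as np

def umeyama(src, dst):
    """Least-squares similarity (s, R, t) such that dst ≈ s * R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    cs, cd = src - mu_s, dst - mu_d
    cov = cd.T @ cs / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Reflection guard: force det(R) = +1.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) * len(src) / (cs ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def align_gt_to_recon(gt_c2w, est_c2w):
    """Map (N, 4, 4) GT camera-to-world poses into the recon's frame."""
    s, R, t = umeyama(gt_c2w[:, :3, 3], est_c2w[:, :3, 3])
    out = gt_c2w.copy()
    out[:, :3, :3] = R @ gt_c2w[:, :3, :3]          # re-orient rotations
    out[:, :3, 3] = s * gt_c2w[:, :3, 3] @ R.T + t  # rescale/shift centers
    return out

# Hypothetical usage — the droid_recon file is a pickled dict whose
# "traj_c2w" entry holds the estimated camera-to-world matrices:
# recon = np.load("droid_recon/Balloon1.npy", allow_pickle=True).item()
# recon["traj_c2w"] = align_gt_to_recon(gt_c2w, recon["traj_c2w"])
# np.save("droid_recon/Balloon1.npy", recon)
```

The reverse direction (aligning the estimated trajectory to GT scale) would also require rescaling the stored depths to stay consistent, which is why overwriting only the poses breaks the reconstruction.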