Open haowang020110 opened 1 month ago
Additionally, I tried force 0-6 and point 0-6, but none of the videos look like PhysDreamer's results compared to the real capture on the project page (I also used cam=2 to get the real-capture view). As mentioned in your paper, in Fig. 6 PhysDreamer and PhysGaussian used the same initial conditions, so I guess it may only be necessary to modify the force params to make the simulated scene look like the real capture? Some videos are listed below, followed by a sketch of the sweep I ran.
https://github.com/user-attachments/assets/d3e889f6-0fac-44a7-b649-f244b9285b7d
https://github.com/user-attachments/assets/d1c43b1c-9390-416e-8d8f-fd5ef8fbc5b0
https://github.com/user-attachments/assets/45803818-0052-4a76-bead-4cfd20074a66
https://github.com/user-attachments/assets/30a1162a-71d8-46ce-b240-66bd7b9006a3
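For reference, this is roughly the sweep I ran. It is only a sketch: it assumes run.sh accepts the force_id=... and point_id=... arguments in the form used in my earlier command, so adjust it to the actual script interface if that differs.

```python
# Sweep the released force/point preset ids (0-6) by invoking run.sh.
# Assumes run.sh accepts "force_id=<n> point_id=<n>" arguments as in the
# command quoted in this thread; this may not match the real script exactly.
import itertools
import subprocess

for force_id, point_id in itertools.product(range(7), range(7)):
    subprocess.run(
        ["bash", "run.sh", f"force_id={force_id}", f"point_id={point_id}"],
        check=True,
    )
```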
Hi, from the videos, it seems that you are running inference with the initialized (un-optimized) material fields?
Can you check whether you loaded the optimized material fields? They can be downloaded from https://huggingface.co/datasets/YunjinZhang/PhysDreamer/tree/main
After loading the material fields, you just need to apply the force and run the simulation.
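One way to sanity-check that the optimized field (rather than the initialization) is actually being loaded is to inspect the downloaded checkpoint directly. This is only a minimal sketch: the file name and tensor layout below are guesses, not the actual layout of the Hugging Face release, so adapt them to whatever the downloaded files contain.

```python
# Rough sanity check: an optimized per-particle material field should show
# spatial variation, whereas an untrained initialization is (near-)constant.
# The path and dict-of-tensors layout are assumptions, not the real release format.
import torch

ckpt_path = "alocasia/optimized_material_field.pt"  # hypothetical path
state = torch.load(ckpt_path, map_location="cpu")

for name, value in state.items():
    if torch.is_tensor(value) and value.dtype.is_floating_point:
        print(f"{name}: shape={tuple(value.shape)}, "
              f"min={value.min().item():.4g}, max={value.max().item():.4g}, "
              f"std={value.std().item():.4g}")
```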
Hi, sorry for the misunderstanding. I did use the optimized material fields; my problem is how to set the force direction and duration so that the simulated object moves like the real-capture video shown on the project page. (I tried the force params in the config, but none of them look like the force in the real capture.)
Hi, author, I am wondering how to run inference with your released model so that the video looks like the real capture. I ran run.sh with force_id=0, point_id=0 on the alocasia scene, but got the video below. This video looks different from the real capture on the project page.
https://github.com/user-attachments/assets/bcb35370-72c2-4423-b63e-a75cb110018f