ymxie97 closed this issue 3 months ago
The render settings we use for the evaluation are actually the same as Consistent4D's. We provide our results here if you need them: stag4d.zip
Thanks for your reply! So the evaluated images are the output after optimization (stage 2) rather than stage 1's output. Another question: why is Consistent4D's LPIPS reported in Table 1 different from the numbers reported in the Consistent4D paper, if the evaluation setting is the same?
The view setting is the same, but the frames are not. In our setting, we use 30 frames for evaluation, so we reran the eval script. Consistent4D gets a better LPIPS in this rerun, and we don't know the reason either (laugh). However, we simply report the result we measured in this table. That's why Consistent4D's LPIPS is better than in the original paper.
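For anyone reproducing this, the evaluation described above amounts to averaging a per-frame image metric over the 30 frames of each sequence. Below is a minimal, hedged sketch of that loop; it uses MSE as a stand-in metric so it runs without extra dependencies (the actual evaluation uses LPIPS, e.g. via the `lpips` package, which expects tensors scaled to [-1, 1]). The frame counts and image sizes here are illustrative, not taken from the eval script.

```python
import numpy as np

def per_frame_score(pred, gt):
    # Stand-in metric (MSE). The real evaluation would call an LPIPS
    # model here instead, e.g. lpips.LPIPS(net='alex') on [-1, 1] tensors.
    return float(np.mean((pred - gt) ** 2))

def evaluate_sequence(pred_frames, gt_frames):
    # Average the per-frame score over the whole sequence
    # (30 frames in the setting described above).
    assert len(pred_frames) == len(gt_frames)
    scores = [per_frame_score(p, g) for p, g in zip(pred_frames, gt_frames)]
    return sum(scores) / len(scores)

# Hypothetical data: 30 frames of 64x64 RGB renders vs. ground truth.
rng = np.random.default_rng(0)
gt_frames = [rng.random((64, 64, 3)) for _ in range(30)]
pred_frames = [f + 0.01 for f in gt_frames]  # uniform small error

print(evaluate_sequence(pred_frames, gt_frames))  # ~1e-4 (0.01 squared)
```

Note that rerunning such a script with a different frame count (or differently sampled frames) changes the average, which is consistent with the LPIPS numbers above differing from the original paper's.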
Thanks for your reply! I really appreciate it.
Hi, Thanks for the great work!
The evaluation views on the dataset from Consistent4D seem to be different. Could you please provide the ground-truth rendered images used for evaluation?
Thanks!