Closed monniert closed 3 years ago
Hi, this is mostly because of the dataset. When we evaluated the scores, we rendered ShapeNet with slightly different settings (camera view, lighting, etc.) compared to the N3MR settings. We ran N3MR, SoftRas, and DIB-R on our own dataset and reported the scores evaluated on that same dataset. This explains the difference.
We also ran DIB-R on the NMR dataset, which we downloaded from https://github.com/autonomousvision/differentiable_volumetric_rendering, and report the score at https://nv-tlabs.github.io/DefTet/.
Thanks for the detailed answer!
3D IoU is missing; I only have Chamfer distance and F-scores, and we report Chamfer in DefTet. If you want the F-score, I can share it with you. As for the dataset, since this work was done at NVIDIA, due to NVIDIA policy I cannot release anything. Sorry about that.
That makes sense. Sure, any additional metrics would help. I think I will quantitatively evaluate DIB-R with default hyperparameters on the NMR dataset anyway, and matching the F-scores will be a good starting point.
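For anyone else trying to match the F-scores mentioned above: a minimal sketch of an F-score between two sampled point clouds, computed at a distance threshold. This is an assumed implementation for illustration (the function name `f_score` and threshold `tau` are mine, not the paper's exact evaluation code):

```python
import numpy as np
from scipy.spatial import cKDTree

def f_score(pred_pts, gt_pts, tau=0.01):
    """F-score at distance threshold tau between (N,3) and (M,3) point sets."""
    # Precision: fraction of predicted points within tau of the ground truth.
    d_pred = cKDTree(gt_pts).query(pred_pts)[0]
    precision = (d_pred < tau).mean()
    # Recall: fraction of ground-truth points within tau of the prediction.
    d_gt = cKDTree(pred_pts).query(gt_pts)[0]
    recall = (d_gt < tau).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Note that the score depends heavily on `tau` and on how many points are sampled from each mesh, so those settings would need to match the paper's before any comparison is meaningful.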
Hi there, thanks for releasing the code for this awesome project! Quick question regarding the evaluation you used:
When I compare the 3D IoU results in your paper with those reported by previous SOTA differentiable rendering algorithms, they differ slightly (e.g., SoftRas reported 62% whereas your paper reports 59%). Do you know where the differences come from? I assume you obtained these results by running their algorithms with your evaluation, but the setup seems quite similar to me.
Thanks
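For concreteness, the 3D IoU metric being compared here is typically computed on voxelized occupancy grids. A minimal sketch (an assumed, generic implementation, not this repo's evaluation code — the discrepancy discussed above comes from the rendering/dataset settings, and voxelization resolution also affects the number):

```python
import numpy as np

def voxel_iou(vox_a, vox_b):
    """3D IoU between two boolean occupancy grids of the same shape."""
    a = np.asarray(vox_a, dtype=bool)
    b = np.asarray(vox_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    # Treat two empty grids as a perfect match to avoid division by zero.
    return inter / union if union > 0 else 1.0
```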