Zhao-Yian opened this issue 3 weeks ago
Hi, the evaluation script is incompatible with this GUI version, which went through a code refactor that removed many of the interfaces used for evaluation.
The evaluation mainly involves loading different datasets such as SPIn-NeRF and NVOS; the metric (IoU) calculation itself is relatively straightforward. We may rewrite the script in the future.
Thanks!
Sorry to ask again: which reference views and target views are used for evaluation on the SPIn-NeRF dataset, and on what basis were they selected?
Hi, the reference view is set to the first frame of the sorted views. However, the method is robust to the choice of reference view, since the segmentation target is relatively simple.
Thank you very much for your answer. May I ask whether the target views used for evaluation are all views except the first frame, and whether the IoU of each scene is the average IoU over these target views?
No, the target views generally include the reference view. Although the reference view has a GT mask for reference, the segmentation does not guarantee that the final result aligns with the initial 2D mask, so it is still meaningful to check whether the reference view is segmented properly.
The IoU score is calculated across all views, not as (IoU_1 + ... + IoU_N) / N. There is a small difference, since (a/b + c/d)/2 != (a+c)/(b+d). We use the latter implementation.
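The accumulated-IoU aggregation described above (summing intersection and union pixel counts over all views before dividing once) can be sketched as follows. The function names and the flat-list mask representation are illustrative assumptions, not the repository's actual code:

```python
def iou_per_view(pred, gt):
    # pred, gt: flat lists of 0/1 pixel labels for a single view (assumed layout)
    inter = sum(p & g for p, g in zip(pred, gt))
    union = sum(p | g for p, g in zip(pred, gt))
    return inter / union if union else 1.0

def iou_across_views(preds, gts):
    # Accumulate intersection and union counts over ALL views first, then
    # divide once: (a + c) / (b + d), not (a/b + c/d) / 2.
    inter = sum(p & g for pv, gv in zip(preds, gts) for p, g in zip(pv, gv))
    union = sum(p | g for pv, gv in zip(preds, gts) for p, g in zip(pv, gv))
    return inter / union if union else 1.0

# Two toy views where the two aggregations disagree:
preds = [[1, 1, 0, 0], [1, 1, 1, 1]]
gts   = [[1, 0, 0, 0], [1, 1, 1, 0]]
mean_iou  = sum(iou_per_view(p, g) for p, g in zip(preds, gts)) / len(preds)  # (0.5 + 0.75) / 2 = 0.625
accum_iou = iou_across_views(preds, gts)  # (1 + 3) / (2 + 4) ≈ 0.667
```

As the toy example shows, per-view averaging weights every view equally, while the accumulated variant weights views by the size of their union, so the two scores generally differ.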
Thank you very much for your answer!
Can evaluation scripts be provided for the different datasets, to validate the quantitative results reported in the paper?