It appears that the evaluation currently focuses only on scene reconstruction metrics, but it lacks the novel view synthesis results shown in Table 1 of the paper. I'd like to ask: is this part of the code missing? Thank you very much! @ziyc
Hi @Zhiyuan624,
This part is indeed included in our codebase. For the novel view synthesis task, we select every 10th frame for the test set. All you need to do is set the `test_image_stride` parameter in the dataset configuration. For example, in `waymo/3cams.yaml`, set `test_image_stride` to 10:
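Something along these lines (an illustrative excerpt only; the surrounding keys and nesting in `waymo/3cams.yaml` may differ):

```yaml
# waymo/3cams.yaml (illustrative excerpt; other dataset keys omitted)
test_image_stride: 10  # hold out every 10th frame as the novel-view test split
```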
This way, training will evaluate the test-set images, log their metrics, and store them in a JSON file under each run's `metrics/` folder.
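If it helps, the logged results can be inspected with a few lines of Python; the run directory below is a placeholder, and the exact JSON file names depend on the run:

```python
import json
from pathlib import Path

# Placeholder path: replace with your actual run directory.
run_dir = Path("work_dirs/my_run")

# Print every metrics JSON logged for this run (file names vary per run).
for metrics_file in sorted((run_dir / "metrics").glob("*.json")):
    with metrics_file.open() as f:
        print(metrics_file.name, json.load(f))
```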
Hi @ziyc,
Thank you for your response! Your answer mainly covers validation during training via `test_image_stride`, but it seems that the post-training evaluation in `eval.py` does not include the computation of the novel view synthesis metrics.
Hi @Zhiyuan624,
No, my answer is about evaluation, not validation. The novel view synthesis part is included in `eval.py`. Please follow the code here and you'll see how it works: https://github.com/ziyc/drivestudio/blob/388642a6a998833cb388e0f1a65dfc2071cc6a61/tools/eval.py#L37-L92
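For intuition, here is a simplified sketch of the stride-based split idea (not the actual `eval.py` code; the function names, exact frame indexing, and reported keys below are illustrative):

```python
import numpy as np

def split_by_stride(num_frames: int, test_image_stride: int):
    """Illustrative split: with a stride of 10, every 10th frame is held out
    for novel view synthesis and the rest are used for reconstruction.
    (The exact offset/indexing in the codebase may differ.)"""
    all_ids = np.arange(num_frames)
    if test_image_stride <= 0:
        return all_ids, np.array([], dtype=int)  # no held-out frames
    test_ids = all_ids[::test_image_stride]
    train_ids = np.setdiff1d(all_ids, test_ids)
    return train_ids, test_ids

def summarize(per_frame_psnr: np.ndarray, test_image_stride: int) -> dict:
    """Average a per-frame metric separately over the two splits, so that
    reconstruction and novel view synthesis numbers are reported side by side."""
    train_ids, test_ids = split_by_stride(len(per_frame_psnr), test_image_stride)
    return {
        "psnr_reconstruction": float(per_frame_psnr[train_ids].mean()),
        "psnr_novel_view": float(per_frame_psnr[test_ids].mean()) if len(test_ids) else None,
    }
```

With 200 frames and a stride of 10, for example, 20 frames are held out, giving separate averages in the spirit of the reconstruction and novel view synthesis columns in Table 1.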