I trained the lego scene (Blender dataset) with the `blender_512.gin` config file. I rendered and evaluated using the included scripts, but I have some questions about computing the final quantitative results.
First, there is no LPIPS metric in this code, and the metric has also been removed from the jaxnerf code. Is there a reason LPIPS was removed, given that it is still widely used to compare NeRF-like methods in papers? If I want to evaluate LPIPS, is it okay to use the LPIPS code or library that many people use?
Second, when I run evaluation with `eval.py`, PSNR and SSIM are computed per image, but the results are not logged to an additional text file. If I want to log them, should I just set `eval_only_once` to `False`?
Third, the average of the metrics over the test set is not computed or saved as a representative result in your code. If I want to compare my trained model against the numbers in your paper, should I just average the per-image scores?
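Concretely, I am assuming something like the following, where the per-image score lists are hypothetical placeholders for what `eval.py` prints per frame:

```python
import numpy as np

# Hypothetical per-image scores collected from the eval output.
psnr_per_image = [30.1, 29.4, 31.2]
ssim_per_image = [0.95, 0.94, 0.96]

# Simple arithmetic mean over the test set, as papers usually report.
avg_psnr = float(np.mean(psnr_per_image))
avg_ssim = float(np.mean(ssim_per_image))
print(f"PSNR: {avg_psnr:.2f}  SSIM: {avg_ssim:.4f}")
```

Is a plain arithmetic mean like this what the paper's tables report, or is there some other aggregation?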
I appreciate your awesome work!