Hi, this result is very strange. Can you provide more details, e.g. the PyTorch version and operating environment?
Besides that, I also suspect the pre-trained model may not be loading correctly. Can you make sure that this line of code runs successfully? https://github.com/zju3dv/ENeRF/blob/38c1b9087833926de897847636016b73f889d22b/lib/utils/net_utils.py#L443
Thanks for your reply. Yes, the pretrained model is loaded successfully. To verify this, we set `strict=True` and got the same results.
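(For reference, a minimal sketch of such a check; the checkpoint key layout here is an assumption, and ENeRF's `load_network` in net_utils.py may organize things differently:)

```python
import torch

def check_load(network, ckpt_path):
    """Fail loudly if the checkpoint and model do not match exactly."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Assumed layout: weights either at the top level or nested under a
    # "net" key; the real checkpoint may use a different scheme.
    state_dict = ckpt["net"] if isinstance(ckpt, dict) and "net" in ckpt else ckpt
    # strict=True raises a RuntimeError on any missing or unexpected key,
    # so a silent partial load cannot slip through.
    network.load_state_dict(state_dict, strict=True)
    print(f"loaded {len(state_dict)} tensors from {ckpt_path}")
```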
We run the code on an Ubuntu server with an NVIDIA 3090 GPU. The PyTorch version is the same as in the README, and the other packages that may have an influence are listed below:
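(A quick way to capture such an environment snapshot, for anyone reproducing this report:)

```python
import torch

# Snapshot of the versions that typically matter for reproducing results.
print("torch:", torch.__version__)
print("cuda (build):", torch.version.cuda)
print("cudnn:", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("gpu:", torch.cuda.get_device_name(0))
```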
All evaluation outputs on the DTU dataset are shown below:
The rendered images look reasonable: Scan114_32_0.png, scan45_44_0.png
Moreover, the evaluation results on nerf_llff_data (32 evaluation images in total) and nerf_synthetic_data (32 evaluation images in total) also differ from the PSNR, SSIM, and LPIPS results reported in the paper:
Again, the rendered images look reasonable: chair_32_0.png (identical to the image shown in the supplementary materials), fortress_25_0.png
We are writing a paper and plan to cite yours and compare against your results, so we want to get to the bottom of this problem. Thanks.
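(For anyone reproducing the comparison, a rough sketch of how these three metrics are commonly computed; this is not necessarily ENeRF's exact evaluation code, and the LPIPS backbone choice below is an assumption that shifts the absolute numbers:)

```python
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity

# Backbone is an assumption; papers variously use "vgg" or "alex".
loss_fn = lpips.LPIPS(net="vgg")

def metrics(pred, gt):
    """pred, gt: float32 arrays in [0, 1], shape (H, W, 3)."""
    mse = np.mean((pred - gt) ** 2)
    psnr = -10.0 * np.log10(mse)
    ssim = structural_similarity(pred, gt, channel_axis=2, data_range=1.0)
    # LPIPS expects NCHW tensors scaled to [-1, 1].
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2.0 - 1.0
    return psnr, ssim, loss_fn(to_t(pred), to_t(gt)).item()
```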
Since the model appears to perform well on the other two datasets, the artifacts suggest an issue with the format of the DTU dataset. The provided renderings are clearly 512x640 in size, so the camera pose scale may be incorrect. To confirm this, could you please review the content of $workspace/dtu/Cameras/train/00000000_cam.txt?
```
extrinsic
0.970263 0.00747983 0.241939 -191.02
-0.0147429 0.999493 0.0282234 3.28832
-0.241605 -0.030951 0.969881 22.5401
0.0 0.0 0.0 1.0

intrinsic
361.54125 0.0 82.900625
0.0 360.3975 66.383875
0.0 0.0 1.0

425.0 2.5
```
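(As a sanity check, a hedged sketch of reading this file in the MVSNet-style layout and rescaling the intrinsics to the render resolution; the x4 factor needed to reach roughly 512x640 from the stored values is an assumption, not ENeRF's confirmed preprocessing:)

```python
import numpy as np

def load_cam(path, scale=4.0):
    """Parse an MVSNet-style cam.txt: a 4x4 world-to-camera extrinsic,
    a 3x3 intrinsic, then depth_min and depth_interval."""
    tokens = open(path).read().split()
    nums = [float(t) for t in tokens if t not in ("extrinsic", "intrinsic")]
    ext = np.array(nums[:16]).reshape(4, 4)
    ixt = np.array(nums[16:25]).reshape(3, 3)
    depth_min, depth_interval = nums[25], nums[26]
    # fx, fy, cx, cy scale linearly with image resolution; rendering at
    # 512x640 with intrinsics stored for a smaller resolution requires
    # this rescale (the assumed source of the discrepancy).
    ixt_scaled = ixt.copy()
    ixt_scaled[:2] *= scale
    return ext, ixt_scaled, depth_min, depth_interval
```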
We followed the README to download the DTU, LLFF, and NeRF datasets. The camera parameters are shown above; there seems to be no difference from your data.
We downloaded the code, pretrained model, and datasets, then ran the rendering command directly after specifying the dataset path. We did not modify the dataset or the code.
I'm sorry, this is my fault: I must have introduced a bug in a later update. However, I need to go to bed now and don't have time to locate it. A quick but temporary workaround is to run `git checkout 2d6b3b2` and then execute the evaluation command.
I will address the issue tomorrow and update the master branch.
I have fixed this bug.
I used the provided generalization model to evaluate on the DTU dataset as described in the README, and got the following PSNR, SSIM, and LPIPS values:
whereas according to the README, the quantitative results should be:
I wonder why the quantitative evaluation results differ. Could you share your evaluation results? Thanks.