nex-mpi / nex-code

Code release for NeX: Real-time View Synthesis with Neural Basis Expansion

Couldn't reproduce the reported LLFF scene scores #55

Open that-liu opened 10 months ago

that-liu commented 10 months ago

I used `python train.py -scene ${PATH_TO_SCENE} -model_dir ${MODEL_TO_SAVE_CHECKPOINT} -http -cv2resize` on the flower scene of LLFF, but did not get the scores reported in the paper (trained on four RTX 2080 Ti GPUs).

Measurement result:

name                 PSNR       SSIM      LPIPS
images_IMG_2962.JPG  25.110090  0.873111  0.24526712
images_IMG_2970.JPG  26.380335  0.898152  0.20776317
images_IMG_2978.JPG  25.021733  0.861492  0.22302648
images_IMG_2986.JPG  28.067660  0.920080  0.17454995
images_IMG_2994.JPG  28.821845  0.926312  0.17679015

Average over the test views: PSNR 26.680332  SSIM 0.895829  LPIPS 0.205479
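For reference, the summary line is just the unweighted mean of the per-image metrics. A minimal sketch (not code from this repo; the values are copied from the table above):

```python
# Minimal sketch, not part of nex-code: verify that the summary line is the
# unweighted mean of the per-image metrics listed above.
per_image = {
    "images_IMG_2962.JPG": (25.110090, 0.873111, 0.24526712),
    "images_IMG_2970.JPG": (26.380335, 0.898152, 0.20776317),
    "images_IMG_2978.JPG": (25.021733, 0.861492, 0.22302648),
    "images_IMG_2986.JPG": (28.067660, 0.920080, 0.17454995),
    "images_IMG_2994.JPG": (28.821845, 0.926312, 0.17679015),
}

n = len(per_image)
psnr  = sum(v[0] for v in per_image.values()) / n
ssim  = sum(v[1] for v in per_image.values()) / n
lpips = sum(v[2] for v in per_image.values()) / n

# Prints roughly: PSNR 26.680333  SSIM 0.895829  LPIPS 0.205479
# (the last PSNR digit differs from the log only by rounding of the inputs)
print(f"PSNR {psnr:.6f}  SSIM {ssim:.6f}  LPIPS {lpips:.6f}")
```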

pureexe commented 10 months ago

Last month, someone emailed me about this problem. I investigated it and confirmed that the released code produces scores that differ from those reported in the paper.

If you want to use NeX as a baseline comparison, you can report the score from retraining, from measuring directly on the rendered results provided in the dataset directory, or taken directly from the paper; any of these is fine with me.
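In case it helps with the second option, here is a rough sketch of how PSNR/SSIM/LPIPS could be measured on a provided rendering against its ground-truth view. This is not the evaluation code used in the repo or the paper; the `compare` helper is hypothetical, and the VGG backbone for LPIPS and the skimage `channel_axis` argument (scikit-image >= 0.19) are assumptions, so exact numbers may not match the tables.

```python
# Rough sketch, NOT the evaluation code from nex-code or the NeX paper.
# Assumes: scikit-image >= 0.19, torch, and the `lpips` pip package.
import numpy as np
import torch
import lpips
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Backbone choice is an assumption; the paper may use a different LPIPS setup.
lpips_fn = lpips.LPIPS(net="vgg")

def to_lpips_tensor(img):
    """HWC float image in [0, 1] -> 1x3xHxW tensor in [-1, 1] (LPIPS convention)."""
    t = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float()
    return t * 2.0 - 1.0

def compare(pred_path, gt_path):
    """Hypothetical helper: metrics for one rendered view vs. its ground truth."""
    pred = imread(pred_path)[:, :, :3].astype(np.float32) / 255.0  # drop alpha if any
    gt   = imread(gt_path)[:, :, :3].astype(np.float32) / 255.0
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    with torch.no_grad():
        lp = lpips_fn(to_lpips_tensor(pred), to_lpips_tensor(gt)).item()
    return psnr, ssim, lp

# Example usage (paths are placeholders):
# print(compare("results/images_IMG_2962.JPG", "gt/images_IMG_2962.JPG"))
```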

Here are tables showing how the scores drift on Crest and Trex:

Crest

Crest PSNR SSIM LPIPS
Paper report 21.23 0.757 0.162
Retrained 20.72 0.714 0.177

Trex

Trex PSNR SSIM LPIPS
Paper report 28.73 0.953 0.192
Retrained 28.69 0.952 0.192