Hi,
The training process is inherently somewhat stochastic, and there is a small chance that the model collapses in a given run. Re-running the code should reproduce the paper's results.
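If the collapse keeps recurring, one option is to sweep a few seeds across reruns. A minimal sketch, assuming a train.py entry point; the flag names here are illustrative, not necessarily this repo's actual CLI:

```python
# Hypothetical rerun loop: vary the seed across runs so an occasional
# collapse can be ruled out. All paths and flags below are assumptions.
import subprocess

for seed in (0, 1, 2):
    subprocess.run(
        [
            "python", "train.py",
            "-s", "data/bungeenerf/rome",  # scene path (assumed layout)
            "--lmbda", "0.004",            # rate-distortion weight from the log below
            "--seed", str(seed),           # hypothetical flag
        ],
        check=True,
    )
```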
Below is our output log for the rome scene under lmbda=0.004.
Please ignore the time-related numbers; they are not accurate, as the machine was running multiple tasks at the same time when we ran that code. :)
2024-02-21 20:20:28,562 - INFO:
Estimated sizes in MB: anchor 2.7709, feat 9.1637, scaling 2.597, offsets 3.9514, hash 0.1356, masks 0.5349, MLPs 0.1573, Total 19.3108
2024-02-21 20:21:39,025 - INFO:
Encoded sizes in MB: anchor 2.7709, feat 9.1631, scaling 2.5868, offsets 3.9502, hash 0.1356, masks 0.5349, MLPs 0.1573, Total 19.2988, EncTime 70.4578
2024-02-21 20:22:42,365 - INFO:
DecTime 63.3345
2024-02-21 20:23:26,714 - INFO:
[ITER 30000] Evaluating test: L1 0.03321390665106235 PSNR 25.69104859136766 ssim 0.8452245631525593 lpips 0.24325476442613908
2024-02-21 20:23:26,715 - INFO: Test FPS: 213.23462
2024-02-21 20:23:34,101 - INFO:
[ITER 30000] Evaluating train: L1 0.021867336705327034 PSNR 29.134645080566408 ssim 0.91586092710495 lpips 0.14810177385807038
2024-02-21 20:23:34,101 - INFO: Test FPS: 137.90701
2024-02-21 20:23:34,116 - INFO:
[ITER 30000] Saving Gaussians
2024-02-21 20:23:40,352 - INFO:
Total Training time: 2558.292266368866
2024-02-21 20:23:40,355 - INFO:
Training complete.
2024-02-21 20:23:40,355 - INFO:
Starting Rendering~
2024-02-21 20:24:48,630 - INFO: Test FPS: 235.08595
2024-02-21 20:24:48,630 - INFO:
Rendering complete.
2024-02-21 20:24:48,631 - INFO:
Starting evaluation...
2024-02-21 20:25:30,528 - INFO: model_paths: outputs_evaluation/bungeenerf/rome/0.004/HAC_13_15_4
2024-02-21 20:25:30,591 - INFO: SSIM : 0.8448924
2024-02-21 20:25:30,592 - INFO: PSNR : 25.6844559
2024-02-21 20:25:30,592 - INFO: LPIPS: 0.2427085
2024-02-21 20:25:30,595 - INFO:
Evaluating complete.
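As a side note for anyone cross-checking the numbers above: the logged size totals are just the sum of the per-component sizes, and PSNR follows the standard log-scale MSE definition (the L1 value in the log is a separate mean-absolute-error metric, not the MSE used for PSNR). A minimal sketch, assuming images normalized to [0, 1]:

```python
import math

def psnr(mse: float) -> float:
    """Standard PSNR for images normalized to [0, 1]."""
    return 10.0 * math.log10(1.0 / mse)

# The 'Total' fields in the log are the sum of the per-component sizes,
# e.g. the 'Estimated sizes' line above:
parts = [2.7709, 9.1637, 2.5970, 3.9514, 0.1356, 0.5349, 0.1573]
print(round(sum(parts), 4))  # 19.3108, matching the logged Total
```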
Thanks for your prompt reply.
I will try again. By the way, could you attach the complete log file so that I have a reference for the training loss?
Thanks for your reply! It seems to be related to the CUDA version.
Thanks for your great work!
Have you encountered the problem that the test accuracy is much lower than the training accuracy? My reproduced PSNR is 21.35 rather than the 25.98 reported in the paper. The log is attached.
Looking forward to your reply.