colloroneluca opened this issue 2 years ago
I have tested configs/custom/default_ubd_inward_facing.py
on 300 frames subsampled from my casually captured video. It should work.
Some general questions: did you use imgs2poses.py to generate the poses, with llffhold=0? Can you make sure the testing views are aligned with the training views solved by COLMAP?
Hi @sunset1995, thanks for your reply.
Again, thank you very much.
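For context, the llffhold parameter mentioned above typically controls an every-Nth holdout split in LLFF-style loaders; llffhold=0 disables the holdout entirely. A minimal sketch of that logic (split_views is an illustrative name, not the repo's actual code):

```python
def split_views(num_images, llffhold):
    """Every llffhold-th image becomes a test view; llffhold=0 disables holdout.

    This mirrors the common LLFF convention; the repo's loader may differ
    in details, so treat this as an illustration only.
    """
    if llffhold > 0:
        test_ids = list(range(0, num_images, llffhold))
    else:
        test_ids = []
    train_ids = [i for i in range(num_images) if i not in test_ids]
    return train_ids, test_ids
```

With llffhold=0 every COLMAP-solved view lands in the training set, which is why the test views must be aligned with the same COLMAP coordinate frame.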
Is it a forward-facing scene? If it is, you should use spherify=False
(see this guide for more detail).
If it is not a forward-facing scene, I guess the problem is too few viewing angles. There are some techniques to reconstruct from fewer views, but they are unfortunately not supported by the current codebase. I suggest capturing more than 100 images covering all aspects of the upper hemisphere of the object of interest.
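For a forward-facing capture, the relevant settings would sit in the dataset section of the config. A hedged sketch of such a fragment (key names follow common LLFF-style NeRF configs; check the repo's configs/custom/*.py for the exact names, which may differ):

```python
# Illustrative config fragment for a forward-facing scene.
# Key names are assumptions based on common LLFF-style configs.
data = dict(
    dataset_type='llff',
    spherify=False,   # forward-facing capture: do not spherify poses
    llffhold=8,       # hold out every 8th view for testing (common default)
)
```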
Actually it is not a forward facing scene.
If it is not a forward-facing scene, I guess the problem is too few viewing angles.
Yes, I do think that too. Anyway, thanks a lot for the help!
Hi, I would like to congratulate you on this wonderful work! I would also like to point out a discrepancy I found between train and test results. I'm training on an inward-facing real-world scene composed of 61 images that redundantly cover the depicted object. Images generated from the training camera poses have PSNR ≈ 26, while test images have PSNR ≈ 16.
The generated test images are affected by large occluding clouds and distortions. I'm using a slightly modified version of the ./configs/custom/default_ubd_inward_facing.py configuration file, which I paste below. Images are 1616x1080. COLMAP's estimated camera poses and 3D point cloud seem to be accurate.
Can you suggest some parameters that I should tune differently in order to get better test results?
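For reference, PSNR here is the standard per-image metric, so the 10 dB gap (26 vs. 16) means the test-view MSE is roughly 10x the train-view MSE, which is consistent with overfitting to the training views. A minimal sketch of the computation (psnr is an illustrative helper, not the repo's evaluation code):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """PSNR in dB between two images with pixel values in [0, max_val]."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    mse = np.mean((pred - gt) ** 2)
    # PSNR = 10 * log10(max_val^2 / MSE); identical images give infinity.
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```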