Bardo-Konrad opened this issue 3 weeks ago
What is the problem there?
I expected fine details, not the typically coarse output of photogrammetry
Did you process everything correctly, or is the result simply not satisfactory?
This was my approach:

```
call activate surfel_splatting
python.exe C:\2d-gaussian-splatting\convert.py -s .
python.exe C:\2d-gaussian-splatting\train.py -s .
python.exe C:\2d-gaussian-splatting\render.py -m <path to output checkpoint folder> -s .
```
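For reference, the three steps above (COLMAP conversion, training, rendering) can be wrapped in a small driver script. Only the script names and flags come from the commands above; the helper functions, the dry-run switch, and the example paths are illustrative assumptions, not part of the repo:

```python
import subprocess
from pathlib import Path

def build_pipeline(repo: str, scene: str, model_dir: str) -> list[list[str]]:
    """Assemble the three 2DGS commands shown in the thread."""
    r = Path(repo)
    return [
        ["python", str(r / "convert.py"), "-s", scene],                     # COLMAP + undistortion
        ["python", str(r / "train.py"), "-s", scene],                       # optimize the Gaussians
        ["python", str(r / "render.py"), "-m", model_dir, "-s", scene],     # render from the checkpoint
    ]

def run_pipeline(repo: str, scene: str, model_dir: str, dry_run: bool = True):
    """Run the steps in order, aborting on the first failure."""
    cmds = build_pipeline(repo, scene, model_dir)
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))            # preview only
        else:
            subprocess.run(cmd, check=True)  # raise if a step exits non-zero
    return cmds
```

Running each step with `check=True` makes a failed COLMAP reconstruction or a crashed training run stop the pipeline immediately, instead of silently rendering from a bad or missing checkpoint.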
The be-all and end-all of GS/NeRF is perspective-dependent reflections and very fine detail. Given that you cannot bake perspective-dependent reflections into meshes, I don't expect those, but the fine details are utterly missing. Instead of this I can just use RealityCapture.
Did the rendering results look normal? Can you share your dataset? Let's figure out some failure cases to facilitate future work.
Sure: images.zip. And the renders: renders.zip, renders2.zip
I think the images are textureless, sparse-viewed, blurred, and low-resolution. This will definitely pose a challenge for NeRF/GS-based solutions. Maybe RealityCapture is a more robust choice in these cases.
Is the goal of this r. to get finer detail in meshes compared to standard photogrammetry?
The goal is geometrically accurate radiance fields.
I understood that this was meant to produce high-quality meshes from NeRFs/Gaussian splatting.
I am underwhelmed by the results. I expected fine details, not the typically coarse output of photogrammetry, and this is even worse.