herksaw closed this issue 3 years ago
I guess it's related to how the mesh is colored, not the NeRF performance. Currently it uses vertex colors, but a more common and better way is to use a texture, as it allows more colors within a triangle. This was suggested by one of the contributors in an email, but I haven't had time to implement it. I believe it's the best way to increase mesh color quality.
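To illustrate the difference: with vertex colors, every point inside a triangle is just a barycentric blend of its three vertex colors, while a texture lets one face index into many independent texels. A minimal sketch (the function names here are illustrative, not from the repo's code):

```python
import numpy as np

# Vertex coloring: a point inside a triangle can only be a barycentric
# blend of 3 vertex colors, so detail within the face is lost.
def shade_vertex_color(bary, vert_colors):
    # bary: (3,) barycentric weights; vert_colors: (3, 3) RGB per vertex
    return bary @ vert_colors

# Texture mapping: the same barycentric weights interpolate UVs instead,
# and the UV indexes a texel grid, so one face can cover many colors.
def shade_texture(bary, vert_uvs, texture):
    uv = bary @ vert_uvs                  # interpolate UV, not color
    h, w = texture.shape[:2]
    x = int(uv[0] * (w - 1))
    y = int(uv[1] * (h - 1))
    return texture[y, x]
```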
I see, this explains it reasonably well. So basically this would require us to unwrap the mesh to get its UV map, then use some sort of algorithm such as mosaic blending to fuse those images back onto the map, right? Let me try to figure this out and see what I can do :D
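The fusion step described above could look roughly like this: once the mesh is UV-unwrapped and each texel has been back-projected into the views that see it, the observations get blended per texel. A naive-averaging sketch (real mosaic/Poisson blending would weight by view quality instead):

```python
import numpy as np

# `samples` maps a texel (y, x) to the list of RGB observations gathered
# from the input images that see it; this structure is an assumption for
# illustration, not the repo's actual data layout.
def fuse_texels(atlas_shape, samples):
    atlas = np.zeros((*atlas_shape, 3))
    for (y, x), colors in samples.items():
        atlas[y, x] = np.mean(colors, axis=0)  # naive average across views
    return atlas
```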
So far I have tried to implement the algorithm proposed in this paper, while keeping the occlusion calculation based on NeRF.
It worked, but the generated UV map does not map correctly to the faces...
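For reference, the NeRF-based occlusion test mentioned above can be sketched as a transmittance check: march from the camera toward the surface point, accumulate density from the trained model, and call the point visible if transmittance stays high. `density_fn` below stands in for a query into the trained NeRF; the thresholds are illustrative:

```python
import numpy as np

def is_visible(cam_pos, point, density_fn, n_samples=64, eps=1e-2, thresh=0.5):
    direction = point - cam_pos
    dist = np.linalg.norm(direction)
    direction = direction / dist
    # stop slightly before the surface so its own density is not counted
    ts = np.linspace(0.0, dist - eps, n_samples)
    dt = ts[1] - ts[0]
    sigmas = np.array([density_fn(cam_pos + t * direction) for t in ts])
    transmittance = np.exp(-np.sum(sigmas * dt))
    return transmittance > thresh
```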
An implementation from the original author exists, but it seems to suffer from a few problems according to its repo issues, and it is no longer active: https://github.com/rafapages/SSMVtex
I have made several tweaks and referred back to the original calculations too, but still no luck. I guess I have no choice but to stop here, as the author isn't around to help and I don't have enough free time to figure all this out.
Also, here is the Python implementation, in case someone finds it helpful. The color_texture_atlas function definitely needs a speed boost from vectorization or Cython/C, as it takes hours when N_grid gets larger.
texturer.zip
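As a sketch of the kind of vectorization that could help: instead of looping over texels in Python, project all texel centers at once and gather their colors with a single fancy-indexing operation. The names below (`gather_colors`, `K`, `R`, `t`) are illustrative assumptions, not taken from the attached code:

```python
import numpy as np

def gather_colors(points, K, R, t, image):
    # points: (N, 3) world-space texel centers; K, R, t: pinhole camera
    cam = points @ R.T + t                # world -> camera, all points at once
    uv = cam @ K.T                        # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
    # clip rounded pixel coords into the image bounds
    px = np.clip(np.round(uv).astype(int), 0, np.array(image.shape[1::-1]) - 1)
    return image[px[:, 1], px[:, 0]]      # one vectorized gather, no Python loop
```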
For now I will try to look for existing implementations in 3D reconstruction repos. With some modifications, I believe we can use their texrecon algorithm to color the mesh based on NeRF without any issues.
Hi, thanks for your contribution! Hope other people find it helpful and can get some insights from your experiments!
A paper that does exactly what we want was just published: NeuTex: Neural Texture Mapping for Volumetric Neural Rendering. It learns a texture mapping for the neural volume, so I guess it is able to generate a texture map of the mesh (although this is not described in the paper).
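My reading of the idea, sketched very loosely: one network maps a 3D surface point to a UV coordinate, a second maps UV to color, and after training the UV-to-color network can be baked into an ordinary texture image. Tiny random-weight MLPs stand in for the real trained models here; this is a toy illustration of the structure, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    return np.tanh(x @ w1) @ w2

# stand-in weights for the two networks (untrained, illustration only)
w_uv1, w_uv2 = rng.normal(size=(3, 16)), rng.normal(size=(16, 2))
w_tex1, w_tex2 = rng.normal(size=(2, 16)), rng.normal(size=(16, 3))

def point_to_color(p):
    uv = mlp(p, w_uv1, w_uv2)       # 3D point -> texture coordinate
    return mlp(uv, w_tex1, w_tex2)  # texture coordinate -> RGB

# Baking: evaluate the UV->color network on a grid to get a texture image.
def bake_texture(n=8):
    us, vs = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    uv = np.stack([us.ravel(), vs.ravel()], axis=1)
    return mlp(uv, w_tex1, w_tex2).reshape(n, n, 3)
```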
In terms of implementation, I also guess it could be built as a straightforward extension of this code. I will probably find time in the future to implement it; #60 has higher priority now.
Closing the issue for now.
Hi, are there any ways to achieve better quality or higher resolution for the coloring part? Here is the result from the tips found here.
The predicted video from trained model turns out great as expected:
But it looks blurred when extracted as textured mesh:
I guess it could be an issue of not having enough training images, as the code uses those images to color the vertices. So, can we:
Or could this in fact be totally unrelated to the input images? For example, it might be an accuracy error in the calculation of the correct RGB values for the projected pixel coordinates, etc. Please let me know more about this.
Dataset attached too :D
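One concrete source of the accuracy error speculated about above: rounding a projected pixel coordinate to the nearest integer before looking up its color. Bilinear interpolation between the four surrounding pixels gives a smoother, more accurate sample. A sketch, assuming `(x, y)` are float pixel coordinates (this is a generic technique, not necessarily what the repo's code does):

```python
import numpy as np

def sample_bilinear(image, x, y):
    h, w = image.shape[:2]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # blend the four neighboring pixels by their fractional distances
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bot = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bot
```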