Closed tommycwh closed 3 years ago
You have to modify the following files to change the inputs to the OpenGL renderer. For training the color module, yes, you can use per-vertex color, position, and surface normal instead. Unless your per-vertex colors are too sparse, you probably don't need to run additional barycentric interpolation etc.
https://github.com/shunsukesaito/PIFu/blob/master/lib/renderer/gl/prt_render.py
https://github.com/shunsukesaito/PIFu/blob/master/lib/renderer/gl/data/prt.vs
https://github.com/shunsukesaito/PIFu/blob/master/lib/renderer/gl/data/prt.fs
Hi Shunsuke,
I am trying to use your code with my dataset, which has no UV textures but provides per-vertex colors with each mesh. From some other issues, I see that you have suggested that others with similar datasets modify your code to read vertex colors as input. However, I have studied your code for a while, and I still do not have a clear idea of how to make these modifications. May I ask for some advice on how to load and render data when only vertex colors are provided?
In fact, I have a more specific question, which is about the use of UV mapping in your code. In lib/data/TrainDataset.py, I see the following code in the function get_color_sampling(...):
I see that surface_colors is sampled from uv_render, and I tracked down that uv_render (prepared in apps/render_data.py) is actually the colors laid out in UV space. Does this mean that, if I have vertex colors, I can prepare my own surface_colors for training instead of going through the UV mapping, e.g. by interpolating the vertex colors or by some other method? Or would some of your other code be helpful in this situation?
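The interpolation I have in mind would look roughly like the sketch below (my own code, not from the repo). It assumes I already have surface sample points together with the index of the triangle each point lies on (e.g. as returned by trimesh.sample.sample_surface), and it recovers each point's color from the three corner colors via barycentric weights:

```python
import numpy as np

def interpolate_vertex_colors(vertices, faces, vertex_colors, points, face_idx):
    """Barycentric interpolation of per-vertex RGB colors at surface points.

    vertices:      (V, 3) mesh vertex positions
    faces:         (F, 3) triangle vertex indices
    vertex_colors: (V, 3) RGB color per vertex, float in [0, 1]
    points:        (N, 3) points lying on the mesh surface
    face_idx:      (N,)   index of the triangle containing each point
    """
    tri = vertices[faces[face_idx]]        # (N, 3, 3) triangle corners
    v0 = tri[:, 1] - tri[:, 0]
    v1 = tri[:, 2] - tri[:, 0]
    v2 = points - tri[:, 0]
    # Standard dot-product formulas for barycentric coordinates
    d00 = np.einsum('ij,ij->i', v0, v0)
    d01 = np.einsum('ij,ij->i', v0, v1)
    d11 = np.einsum('ij,ij->i', v1, v1)
    d20 = np.einsum('ij,ij->i', v2, v0)
    d21 = np.einsum('ij,ij->i', v2, v1)
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    w0 = 1.0 - w1 - w2
    bary = np.stack([w0, w1, w2], axis=1)           # (N, 3) weights
    corner_colors = vertex_colors[faces[face_idx]]  # (N, 3, 3) corner RGB
    return np.einsum('nc,ncd->nd', bary, corner_colors)

# Sanity check: the centroid of a triangle gets the average corner color.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
cols = np.eye(3)  # pure red, green, blue corners
centroid = verts.mean(axis=0, keepdims=True)
print(interpolate_vertex_colors(verts, tris, cols, centroid, np.array([0])))
# -> approximately [[0.3333, 0.3333, 0.3333]]
```

The same weights could also interpolate per-vertex normals at the sampled points, if those are needed for the color module's input.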
Thank you very much for your amazing work, Shunsuke!