amundra15 opened 1 year ago
Hi, I no longer have the optimization code, but the structure should be similar to nr_reg here: just change the optimization variables to the NIMBLE parameters and use per-vertex distance as the loss.
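A minimal sketch of such a fitting loop, assuming a differentiable `nimble_layer` callable that maps a flat parameter vector to mesh vertices (a simplification: the real NIMBLE layer takes separate shape/pose/appearance parameters, and names here are hypothetical):

```python
import torch

def fit_to_mesh(nimble_layer, target_verts, n_params, n_iters=500, lr=0.1):
    """Fit model parameters to a ground-truth mesh by minimizing the
    mean per-vertex squared distance. `nimble_layer` is a placeholder
    for a differentiable parameter-to-vertices mapping."""
    params = torch.zeros(n_params, requires_grad=True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        verts = nimble_layer(params)                      # (V, 3)
        loss = ((verts - target_verts) ** 2).sum(-1).mean()  # per-vertex distance
        loss.backward()
        opt.step()
    return params.detach(), loss.item()
```

In practice one would also add the regularization terms used in nr_reg (e.g. priors on the parameters) to keep the hand plausible.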
The UV coordinates are embedded in the model file. You can refer to this function to see how I export the geometry with UV coordinates.
Thanks!
Hey, as a follow-up: do you have the RGB-based fitting code? I have multi-view images, and I want to use one or more of them to fit NIMBLE. It would be great if you could provide the optimization code you used in the paper, as re-implementing it from scratch could lead to sub-optimal performance.
The RGB-based result in our paper comes from a learning-based method (I2L-MeshNet), not from fitting. If you have one or multiple images, I suggest first getting the joint positions and then regressing the parameters from them. The optimization code should be quite similar to nr_reg.
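The joints-first variant could be sketched as follows, where `param_to_verts` and `joint_regressor` are hypothetical stand-ins for the NIMBLE layer and a (J, V) joint-regression matrix, and `target_joints` are the detected 3D joints:

```python
import torch

def fit_params_to_joints(param_to_verts, joint_regressor, target_joints,
                         n_params, n_iters=500, lr=0.1):
    """Fit parameters so that joints regressed from the mesh match the
    detected 3D joint positions. `joint_regressor` is a (J, V) matrix
    mapping vertices to joints (placeholder interface)."""
    params = torch.zeros(n_params, requires_grad=True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        joints = joint_regressor @ param_to_verts(params)   # (J, 3)
        loss = ((joints - target_joints) ** 2).sum(-1).mean()
        loss.backward()
        opt.step()
    return params.detach(), loss.item()
```

With multiple views, the joints could first be lifted to 3D (e.g. by triangulation) and then fitted once, rather than fitting 2D joints per view.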
If I understand this correctly, this will only give you the geometry, right? How do you get the appearance?
You can use the photometric loss described in HTML. The process would be: set the appearance parameters as the optimization variables, then for each image use a differentiable renderer (such as PyTorch3D) to compute a photometric loss. You can set a fixed lighting condition for all views.
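A sketch of that appearance loop, with `render_fn` as a placeholder for a differentiable renderer (e.g. one built with PyTorch3D) that is already configured with the fitted geometry, per-view cameras, and a fixed lighting condition; a squared per-pixel error is used here as the photometric loss:

```python
import torch

def fit_appearance(render_fn, images, n_app_params, n_iters=500, lr=0.1):
    """Optimize appearance parameters against one or more views.
    `render_fn(app, view_idx)` is a placeholder that renders the mesh
    with appearance `app` from view `view_idx` (fixed geometry/lighting)."""
    app = torch.zeros(n_app_params, requires_grad=True)
    opt = torch.optim.Adam([app], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = sum(((render_fn(app, i) - img) ** 2).mean()
                   for i, img in enumerate(images))
        loss.backward()
        opt.step()
    return app.detach(), float(loss)
```

Summing the loss over all views lets every image constrain the same appearance vector, which is what makes the multi-view setting useful here.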
Hi,
Thanks for the amazing work. I want to try NIMBLE for my project. Do you have the optimization code for getting the NIMBLE parameters using the ground-truth mesh and texture map?
Also, can you provide the UV coordinates for your mesh? Right now I am using the MANO UV coordinates to generate my texture map, but that is not aligned with the NIMBLE-generated maps.
Best, Akshay