I don't know if this qualifies as an issue.
I am trying to optimize the parameters that define a mesh by minimizing the depth map loss.
Similar to this issue.
Basically:
$\underset{\text{weights}}{\operatorname{argmin}} \; \lVert \text{gt\_depth\_map} - \text{render\_depth}(\text{get\_mesh}(\text{weights})) \rVert$
The idea is that I can express a new mesh as a linear combination of PCA components that accurately describe my shape space (like an extremely simplified SMPL) and compare its rendered depth to a ground-truth depth map. I note this because the mesh is linear in the parameters, so there shouldn't be any issues with differentiation. For some reason I can't figure out, the loss grows and grows and the optimization completely fails. Here is the relevant code:
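To make the setup concrete, here is a minimal, self-contained stand-in that mirrors the structure of my loop. Everything in it is a placeholder, not my actual pipeline: the PCA model is random, and the "depth renderer" is just a fixed linear projection so the whole thing runs end to end.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for the real pipeline (placeholder model and "renderer"):
# linear PCA mesh model, depth map = fixed linear projection of vertices.
torch.manual_seed(0)
n_verts, n_components = 100, 5
mean_shape = torch.randn(n_verts * 3)                # flattened mean mesh
components = torch.randn(n_components, n_verts * 3)  # PCA basis
proj = torch.randn(64, n_verts * 3)                  # toy "renderer"

def get_mesh(weights):
    # The mesh is linear in the weights, as described above.
    return mean_shape + weights @ components

def render_depth(verts):
    # Placeholder for the real differentiable depth renderer.
    return proj @ verts

gt_weights = torch.randn(n_components)
gt_depth_map = render_depth(get_mesh(gt_weights))

weights = torch.zeros(n_components, requires_grad=True)
opt = torch.optim.Adam([weights], lr=0.05)
with torch.no_grad():
    initial = F.mse_loss(render_depth(get_mesh(weights)), gt_depth_map).item()
for step in range(500):
    opt.zero_grad()
    loss = F.mse_loss(render_depth(get_mesh(weights)), gt_depth_map)
    loss.backward()
    opt.step()
final = loss.item()
print(f"loss: {initial:.2f} -> {final:.4f}")
```

On this toy linear problem the loss drops as expected; the real setup differs only in the renderer.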
I tried all sorts of things: reducing/increasing the learning rate, changing optimizers, scaling the weights. Nothing works; no matter what I do, the loss increases.
After confirming that there aren't any obvious bugs in the code, that the renderers are configured correctly, and that my depth maps look normal, I ran two tests to check whether this methodology can work at all.
First, I set up a similar (albeit not equivalent) problem: a function that receives (length, width, height) and uses them to define a pyramid. I then used this same approach to find the pyramid parameters whose depth map minimizes the loss. This worked like a charm.
Second, I solved the original problem with scipy's least_squares. I expected this to fail completely; although its result isn't perfect, it surprisingly far outperforms the PyTorch optimization above.
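For reference, this is the kind of least_squares call I mean, run on a toy stand-in problem (the model and "renderer" here are placeholders, not my actual code):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the real pipeline (placeholder model and "renderer"):
# linear PCA mesh model, depth map = fixed linear projection of vertices.
rng = np.random.default_rng(0)
n_verts, n_components = 100, 5
mean_shape = rng.standard_normal(n_verts * 3)
components = rng.standard_normal((n_components, n_verts * 3))
proj = rng.standard_normal((64, n_verts * 3))

def get_mesh(weights):
    return mean_shape + weights @ components

def render_depth(verts):
    return proj @ verts

gt_weights = rng.standard_normal(n_components)
gt_depth_map = render_depth(get_mesh(gt_weights))

def residuals(weights):
    # least_squares expects the residual vector, not a scalar loss.
    return render_depth(get_mesh(weights)) - gt_depth_map

result = least_squares(residuals, x0=np.zeros(n_components))
print(result.x - gt_weights)  # near zero on this linear toy problem
```

least_squares only needs residuals (no gradients through the renderer), which is why it can work even where the autograd path is broken.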
That's the reason I'm posting here. I've been through all the examples and read a whole lot of documentation, but I can't shake the feeling that there is something I'm missing.