EoAelinr closed this issue 5 years ago
I have never seen a closure used for PyTorch optimization, but this should nonetheless work. Is A.x actually a parameter? You should check the gradient flow.
Thank you for your answer. You are right, A.x wasn't a parameter; adding A.x.requires_grad_(True) in closure() allowed it to be optimized.
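For later readers, a minimal sketch of that fix, assuming an LBFGS-style closure setup; `model` and `loss_module` are hypothetical names standing in for the network and loss described in the question below:

```python
import torch

A.x.requires_grad_(True)              # the fix: make A.x a leaf tensor that autograd tracks
optimizer = torch.optim.LBFGS([A.x])  # LBFGS is the stock optimizer that takes a closure

def closure():
    optimizer.zero_grad()
    loss = loss_module(model(A))      # hypothetical forward pass and loss
    loss.backward()
    return loss
```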
The features that I want to optimize are the xyz coordinates associated with each node (the graph is a mesh). I am, however, unsure how to update those according to the optimized A.x. Do you think I could try to update A.edge_attr instead, or might that produce infeasible meshes (e.g. with triangles whose edge lengths a, b, c satisfy a > b + c)?
I wouldn't try to optimize A.edge_attr when working with meshes; you will get corrupted meshes for sure. You should simply optimize the xyz coordinates based on the fixed mesh connectivity.
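A sketch of that suggestion (shown with Adam for brevity; the closure-based LBFGS above works the same way). It assumes the model reads A.pos, or recomputes edge attributes from it, so gradients actually reach the coordinates; `model` and `feature_loss` are hypothetical names:

```python
import torch

A.pos.requires_grad_(True)                     # optimize vertex coordinates only
optimizer = torch.optim.Adam([A.pos], lr=1e-3)

for _ in range(200):
    optimizer.zero_grad()
    out = model(A)            # connectivity (A.edge_index / A.face) stays fixed
    loss = feature_loss(out)  # hypothetical loss against B's features
    loss.backward()
    optimizer.step()
```

Since edge lengths are then derived from actual 3D coordinates, the triangle inequality holds by construction, which is not guaranteed when edge attributes are optimized freely.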
I had missed that the gradient could flow up to it. Optimizing A.pos worked. Thank you.
❓ Questions & Help
Hi.
I am trying to optimize the node features of one graph (A) according to the features of another (B); the goal is to make the features correlate. Both A and B are loaded as Batch objects with FAUST() and DataLoader(). My network is as follows:
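The network snippet itself did not survive this extraction. Purely as a hypothetical stand-in, a SplineConv-based pose classifier in the style of the PyTorch Geometric FAUST example might look like this (it assumes data.edge_attr holds pseudo-coordinates, e.g. from a Cartesian transform):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SplineConv, global_mean_pool

class Net(torch.nn.Module):
    # Hypothetical stand-in: SplineConv layers over the mesh,
    # pooled to a graph-level pose prediction.
    def __init__(self, num_poses=10):
        super().__init__()
        self.conv1 = SplineConv(1, 32, dim=3, kernel_size=5)
        self.conv2 = SplineConv(32, 64, dim=3, kernel_size=5)
        self.lin = torch.nn.Linear(64, num_poses)

    def forward(self, data):
        x = F.elu(self.conv1(data.x, data.edge_index, data.edge_attr))
        x = F.elu(self.conv2(x, data.edge_index, data.edge_attr))
        x = global_mean_pool(x, data.batch)       # one vector per mesh
        return F.log_softmax(self.lin(x), dim=-1)
```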
Its weights were obtained by training it to classify FAUST poses. The loss is computed in the Loss module (which has access to B's precomputed features).
My loss is non-zero (e.g. 643.201843), but A.x does not change at all. What might I be doing wrong? My optimization loop is as follows:
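The loop snippet also did not survive this extraction. A minimal reconstruction of the kind of closure-based loop being described, with the requires_grad fix from the discussion above folded in (all names apart from the PyTorch API are assumptions):

```python
import torch

A.x.requires_grad_(True)  # the fix discussed above: without this, A.x never changes
optimizer = torch.optim.LBFGS([A.x])

def closure():
    optimizer.zero_grad()
    loss = loss_module(model(A))    # hypothetical: loss against B's precomputed features
    loss.backward()
    return loss

for step in range(100):
    loss = optimizer.step(closure)  # LBFGS.step returns the closure's loss
    print(step, loss.item())
```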