Closed: Liumouliu closed this issue 1 year ago
Hi!
I'd guess that this is related to how you're transferring parameters to your CUDA device, but I unfortunately can't say anything for sure... the operations themselves (conversion, multiplication, sampling) should be continuous and straightforward to autodiff through, right? Can you check `.is_leaf` on `self.angleaxis`?
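For reference, here is a minimal sketch of the leaf vs. non-leaf distinction that check would reveal (the tensor values are illustrative, and a CUDA device is assumed):

```python
import torch

# Leaf: move the data to the device *before* wrapping it in a Parameter.
angleaxis = torch.nn.Parameter(torch.zeros(3, dtype=torch.float32).to("cuda"))
print(angleaxis.is_leaf)  # True -> an optimizer can update it

# Non-leaf: calling .to() on the Parameter returns a new tensor that is the
# *output* of an op, so it is no longer a leaf of the autograd graph.
moved = torch.nn.Parameter(torch.zeros(3, dtype=torch.float32)).to("cuda")
print(moved.is_leaf)  # False -> .grad stays None
# torch.optim.Adam([moved]) raises: ValueError: can't optimize a non-leaf Tensor
```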
I'd also note that our JAX implementation is a bit different; for example we parameterize with quaternions + optimize many rotations (not just 1).
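(The JAX code itself isn't shown here, but in PyTorch terms a quaternion parametrization over many rotations might look roughly like the sketch below; `quaternion_to_matrix` is from pytorch3d, and the count and shapes are assumptions.)

```python
import torch
from pytorch3d.transforms import quaternion_to_matrix

n_rotations = 8  # hypothetical number of rotations optimized jointly
# One identity quaternion (w, x, y, z) = (1, 0, 0, 0) per rotation.
quats = torch.nn.Parameter(
    torch.tensor([[1.0, 0.0, 0.0, 0.0]]).repeat(n_rotations, 1)
)
# Normalize each row to a unit quaternion, then convert: (n_rotations, 3, 3).
R = quaternion_to_matrix(quats / quats.norm(dim=-1, keepdim=True))
```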
Thank you very much!
You are absolutely correct! This problem is solved.
Glad you figured it out!!
Hi,
Thank you for this exciting and simple paper.
I want to replace the original VM representation with a learnable one.
If I understand correctly, all I need to do is first create the axis-angle parametrization
`self.angleaxis = torch.nn.Parameter(torch.tensor([0, 0, 0], dtype=torch.float32))`,
and then apply the rotation matrix from `axis_angle_to_matrix(self.angleaxis)` to the original input 3D points.
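A minimal sketch of that setup (assuming pytorch3d's `axis_angle_to_matrix`; the module name and the `points` tensor are illustrative):

```python
import torch
from pytorch3d.transforms import axis_angle_to_matrix

class LearnableRotation(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Axis-angle parametrization: direction = axis, norm = angle (radians).
        self.angleaxis = torch.nn.Parameter(
            torch.tensor([0.0, 0.0, 0.0], dtype=torch.float32)
        )

    def forward(self, points):  # points: (N, 3)
        R = axis_angle_to_matrix(self.angleaxis)  # (3, 3) rotation matrix
        return points @ R.T  # rotate every point

rot = LearnableRotation().to("cuda")  # Module.to() moves parameters in place
```

Moving the whole module with `.to(device)` keeps `self.angleaxis` a leaf tensor, so it can be handed to the optimizer directly.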
Unfortunately, I find that the rotation parameters cannot be optimized (grad is None, and the optimizer raises `ValueError: can't optimize a non-leaf Tensor`). Could you give me a hint as to why gradients are not back-propagating to the rotation parameters?
Thank you very much!