brentyi / tilted

Canonical Factors for Hybrid Neural Fields @ ICCV 2023

non-differentiable #2

Closed Liumouliu closed 1 year ago

Liumouliu commented 1 year ago

Hi,

Thank you for this exciting and simple paper.

I want to replace the original VM representation with the learnable VM representation.

If I understand correctly, all I need to do is first use the axis-angle parametrization

self.angleaxis = torch.nn.Parameter(torch.tensor([0.0, 0.0, 0.0], dtype=torch.float32))

and then apply the rotation matrix from axis_angle_to_matrix(self.angleaxis) to the original input 3D points.
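For concreteness, that setup can be sketched roughly as below. The Rodrigues-formula helper is a torch-only stand-in for pytorch3d.transforms.axis_angle_to_matrix, and the RotatedPoints module name is hypothetical, not from the paper's code:

```python
import torch


def axis_angle_to_matrix(aa: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: (3,) axis-angle vector -> (3, 3) rotation matrix.

    Torch-only stand-in for pytorch3d.transforms.axis_angle_to_matrix
    (which also handles the small-angle case more carefully).
    """
    angle = aa.norm()
    # Clamp guards the division at the identity rotation.
    axis = aa / angle.clamp_min(1e-8)
    # Skew-symmetric cross-product matrix K of the rotation axis.
    K = torch.zeros(3, 3, dtype=aa.dtype, device=aa.device)
    K[0, 1], K[0, 2] = -axis[2], axis[1]
    K[1, 0], K[1, 2] = axis[2], -axis[0]
    K[2, 0], K[2, 1] = -axis[1], axis[0]
    eye = torch.eye(3, dtype=aa.dtype, device=aa.device)
    return eye + torch.sin(angle) * K + (1.0 - torch.cos(angle)) * (K @ K)


class RotatedPoints(torch.nn.Module):
    """Learnable global rotation applied to input 3D points."""

    def __init__(self) -> None:
        super().__init__()
        # Identity rotation at init; optimized jointly with the field.
        self.angleaxis = torch.nn.Parameter(torch.zeros(3))

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        R = axis_angle_to_matrix(self.angleaxis)
        return points @ R.T  # rotate (N, 3) row vectors
```

The rotated points would then be fed into the original VM representation in place of the raw inputs.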

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

Unfortunately, I find that the rotation parameters cannot be optimized (their .grad is None, and the optimizer raises ValueError: can't optimize a non-leaf Tensor). Would you please give me a hint as to why gradients cannot back-propagate to the rotation parameters?

Thank you very much!

brentyi commented 1 year ago

Hi!

I'd guess that this is related to how you're transferring parameters to your CUDA device, but I unfortunately can't say anything for sure... the operations themselves (conversion, multiplication, sampling) should be continuous and straightforward to autodiff through, right? Can you check .is_leaf on self.angleaxis?

I'd also note that our JAX implementation is a bit different; for example we parameterize with quaternions + optimize many rotations (not just 1).

Liumouliu commented 1 year ago

Thank you very much!

You are absolutely correct! This problem is solved.

brentyi commented 1 year ago

Glad you figured it out!!