zju3dv / mlp_maps

Code for "Representing Volumetric Videos as Dynamic MLP Maps" CVPR 2023

Gradients of sigma with respect to norm #5

Closed sillybirrrd closed 1 year ago

sillybirrrd commented 1 year ago

Hi,

Thanks for your great work! When I was trying to compute the gradient of $\sigma$ with respect to the normalized points, I ran into the error below:

RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.

It seems the two variables are independent in the autograd graph. Did I do something wrong, or is there no direct way to compute the gradient of $\sigma$ with respect to the normalized points in this representation?

Here's how I implemented it:

def calculate_density(self, wpts, batch, params=None, normalize=True, return_feat=False):
    if params is None:
        params = self.calculate_params(batch)

    # map world points into normalized coordinates
    if normalize:
        norm = self.to_norm(wpts, batch)
    else:
        norm = wpts

    xyz_feat = self.calculate_feature(norm, batch, params)

    # sample density parameters from the plane feature grids at the normalized points
    density_weight = conv_kn_layers.sample_grid_feature(
        norm=norm,
        trip_kn_features=params['trip_kn_density_params'],
        plane_slices=self.kn_plane_slices
    )

    sigma = torch.sum(density_weight * xyz_feat, dim=-1)

    # differentiate sigma with respect to the normalized points
    d_output = torch.ones_like(sigma, requires_grad=False, device=sigma.device)
    gradients = torch.autograd.grad(
        outputs=sigma,
        inputs=norm,
        grad_outputs=d_output,
        create_graph=True,
        retain_graph=True,
        only_inputs=True
    )[0]

I'd much appreciate it if you could offer me some advice!

sillybirrrd commented 1 year ago

I have already set norm.requires_grad_() before calling this function.
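
For reference, here is a minimal sketch (the tensors are illustrative, not from the repo) of how this error can appear even when the input has requires_grad set: if any op along the path does not record a grad_fn, the output's graph never reaches the input.

    import torch

    x = torch.randn(8, 3, requires_grad=True)
    w = torch.randn(8, 3, requires_grad=True)

    # x.detach() cuts the autograd graph, mimicking an op autograd does not know about
    y = (x.detach() * w).sum()

    torch.autograd.grad(y, w, retain_graph=True)  # works: y depends on w in the graph
    torch.autograd.grad(y, x)  # RuntimeError: ... appears to not have been used in the graph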

pengsida commented 1 year ago

Thank you for your interest. It may be caused by the kilonerf_cuda library.
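
If that is the cause, a common fix is to expose the kernel through a torch.autograd.Function with an explicit backward. The sketch below is hypothetical: cuda_ext.sample_forward and cuda_ext.sample_backward are placeholder names, not the actual kilonerf_cuda API.

    import torch

    class SampleGridFeature(torch.autograd.Function):
        # A raw extension call returns a plain tensor with no grad_fn, so autograd
        # cannot trace back to `norm`; an explicit Function restores the connection,
        # provided a backward kernel is available.

        @staticmethod
        def forward(ctx, norm, features):
            ctx.save_for_backward(norm, features)
            return cuda_ext.sample_forward(norm, features)  # hypothetical forward kernel

        @staticmethod
        def backward(ctx, grad_out):
            norm, features = ctx.saved_tensors
            # hypothetical backward kernel returning d(out)/d(norm) and d(out)/d(features)
            grad_norm, grad_features = cuda_ext.sample_backward(grad_out, norm, features)
            return grad_norm, grad_features

As a quick check without touching CUDA code, torch.nn.functional.grid_sample is differentiable with respect to the sampling grid, so temporarily swapping it in for the CUDA sampler would confirm whether the graph break is inside kilonerf_cuda.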