Closed: mahmoudEltaher closed this issue 3 years ago
Hi,
Thanks for your interest in our work. Our model is unsupervised w.r.t. the primitive parameters, namely we don't have primitive annotations. However, we do use supervision in the form of a watertight mesh, from which we sample surface samples (points on the surface of the target object) and occupancy pairs (points inside and outside the bounding box that contains the target mesh, each accompanied by a label indicating whether they're inside or outside). The gt_labels do not correspond to primitive annotations but to occupancy labels derived from the watertight mesh. For more details please check out our paper.
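The occupancy pairs described above can be sketched roughly as follows. This is a minimal stand-in, not the repo's actual sampling code: a unit sphere plays the role of the watertight mesh, whereas in practice an inside/outside containment test on the mesh (e.g. ray casting) would produce the labels.

```python
import numpy as np

# Hedged sketch: sampling occupancy pairs for a watertight shape.
# A sphere of radius 0.75 stands in for the watertight mesh; a real
# pipeline would run a mesh containment test instead.
rng = np.random.default_rng(0)

# Sample points uniformly in the bounding box of the shape.
points = rng.uniform(-1.0, 1.0, size=(1024, 3))

# Label each point: 1 = inside the shape, 0 = outside.
gt_labels = (np.linalg.norm(points, axis=1) < 0.75).astype(np.float32)
```

These (point, label) pairs are what the occupancy loss is computed against; no per-primitive annotation is involved.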
The occupancy loss is simply a classification loss between the predicted and the target occupancy labels. The variables inside_but_out and outside_but_in are used to compute the classification loss for points that the network says are internal whereas they are outside the target mesh, and for points that the network says are external whereas they are inside the mesh.
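A minimal numpy sketch of these two penalty terms (the repo uses PyTorch; only the inside_but_out line is quoted later in this thread, so the symmetric outside_but_in term here is my guess at the intent). The convention assumed is that phi < 0 means the network predicts "inside" and gt_labels == 1 means the point is inside the mesh:

```python
import numpy as np

def occupancy_loss(phi, gt_labels, gt_weights):
    # Hedged sketch, not the repo's exact implementation.
    relu = lambda x: np.maximum(x, 0.0)
    # Points labeled inside (gt_labels == 1) but predicted outside (phi > 0).
    inside_but_out = (relu(phi) * gt_weights)[gt_labels == 1].sum()
    # Points labeled outside (gt_labels == 0) but predicted inside (phi < 0).
    outside_but_in = (relu(-phi) * gt_weights)[gt_labels == 0].sum()
    return inside_but_out + outside_but_in
```

Correctly classified points incur zero penalty because the relu zeroes out the corresponding sign.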
Best, Despoina
Thanks for your response.
But I have an inquiry: what is the value of predictions["phi_volume"], and what is "gt_weights"? What is meant by the following line? inside_but_out = (phi_volume.relu() * gt_weights)[gt_labels == 1].sum()
And what is meant by phi_volume = -torch.logsumexp(-phi_volume, dim=2, keepdim=True)?
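For context on that last line: -logsumexp(-x) is a smooth approximation of min(x), so this operation combines the per-primitive implicit values along dim=2 into a single soft-minimum field for the union of primitives. A toy numpy illustration (the array shape and values here are made up):

```python
import numpy as np

def soft_min(x, axis):
    # -logsumexp(-x): a smooth, differentiable lower bound on min(x).
    return -np.log(np.sum(np.exp(-x), axis=axis))

# Toy per-primitive implicit values for one point and three primitives.
phi_per_primitive = np.array([[3.0, 0.1, 5.0]])
phi = soft_min(phi_per_primitive, axis=1)  # close to the smallest value, 0.1
```

Because it lower-bounds the true minimum, the result is slightly below the smallest per-primitive value, which keeps the combined field differentiable w.r.t. every primitive rather than just the closest one.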
hi @paschalidoud
this is a kind reminder
Hi,
I tried to understand conservative_implicit_surface_loss (which is responsible for computing the occupancy loss).
I want to know why we use gt_labels in this method, while the method is supposed to be unsupervised. What is the ground truth for the points?
What are the two terms "inside_but_out" and "outside_but_in"? And why do we take the summation over the points of the target shape, as explained in the paper, when it should instead be over the points of the primitives? Even if all points of the target shape are covered, the problem lies with points that are inside a primitive but not inside the target shape.
Regards,