Closed: shivamsaboo17 closed this issue 5 years ago.
You can think of it this way: we have a rendering function R whose input is the vertices P and whose output is the predicted mask M, i.e. M = R(P). This function is differentiable, which means you can back-propagate gradients from M to P, so the accuracy loss (computed on M) can supervise P. For more details, please refer to OpenDR.
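As a toy illustration of the idea (not OpenDR itself, and not this repo's renderer): below, a hypothetical "soft" renderer splats a Gaussian blob per vertex, so the mask M = R(P) varies smoothly with the vertex coordinates, and an L1 accuracy loss on M has a nonzero gradient with respect to P. The grid size `H`, `W` and `sigma` are assumed values chosen for the sketch.

```python
import numpy as np

def soft_render(P, H=16, W=16, sigma=2.0):
    # Toy stand-in for R: each vertex splats a Gaussian blob onto the
    # pixel grid, so the soft mask is a smooth function of P.
    # (A real renderer such as OpenDR rasterizes the actual polygon.)
    ys, xs = np.mgrid[0:H, 0:W]
    M = np.zeros((H, W))
    for x, y in P:
        M += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return M

def l1_loss(P, target):
    # Accuracy loss: L1 distance between the rendered mask and the target.
    return np.abs(soft_render(P) - target).sum()

P = np.array([[5.0, 5.0], [10.0, 8.0]])                    # predicted vertices
target = soft_render(np.array([[6.0, 6.0], [9.0, 9.0]]))   # toy ground truth

# Central finite difference on the x-coordinate of the first vertex:
# the loss responds smoothly to a small vertex shift, so gradients can
# flow from the mask M back to the points P.
eps = 1e-5
dP = np.zeros_like(P)
dP[0, 0] = eps
g = (l1_loss(P + dP, target) - l1_loss(P - dP, target)) / (2 * eps)
print("dL/dx0 =", g)
```

Because every pixel depends smoothly on every vertex coordinate here, `g` comes out nonzero, which is exactly the property a hard 0/1 rasterization lacks.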
The model predicts a new position (x_new, y_new) for each vertex. In the accuracy loss, do we create a binary segmentation mask (of 1s and 0s) from these predicted points and take the L1 norm against the ground-truth mask? If so, could you briefly explain how that is done and why it is not differentiable? Thanks.
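To see why a hard 0/1 mask breaks differentiability, here is a toy sketch (a hypothetical radius-based rasterizer, not the repo's actual code): each pixel is 1 iff it lies within radius `r` of some vertex, a step function of the vertex positions. A small shift of a vertex flips no pixel, so the finite-difference gradient of the loss with respect to P is zero almost everywhere and no training signal reaches the points.

```python
import numpy as np

def hard_render(P, H=16, W=16, r=3.0):
    # Hypothetical hard rasterizer: pixel is 1 iff it lies within radius
    # r of some vertex. The mask is a step function of P, so it is
    # piecewise constant in the vertex coordinates.
    ys, xs = np.mgrid[0:H, 0:W]
    M = np.zeros((H, W))
    for x, y in P:
        inside = (xs - x) ** 2 + (ys - y) ** 2 <= r ** 2
        M = np.maximum(M, inside.astype(float))
    return M

P = np.array([[5.3, 5.2]])                          # predicted vertex
target = hard_render(np.array([[7.1, 7.4]]))        # toy ground-truth mask
loss = lambda Q: np.abs(hard_render(Q) - target).sum()

# A tiny shift in the vertex flips no pixel, so the central finite
# difference of the L1 loss is exactly zero: no gradient reaches P.
eps = 1e-4
dP = np.zeros_like(P)
dP[0, 0] = eps
g = (loss(P + dP) - loss(P - dP)) / (2 * eps)
print("dL/dx0 =", g)
```

This is why differentiable renderers replace the hard inside/outside test with a smooth approximation, so that each pixel value changes continuously as the vertices move.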