Open Tandon-A opened 1 year ago
I'm pretty sure the gradients are still propagated through the int conversion. Notice that before converting to an int, the z-values are multiplied by 1000, so the z-axis resolution (the one we care about for gradient propagation) is less than 1 mm. The x and y axes are rescaled so that one integer increment corresponds to one taxel up/down or left/right.
The reason they are converted to ints is so that I can use the torch.unique
function to sort the vertices according to their x, y position across the surface of the bed. This sorting also orders them by which one is in the "lowest" or "highest" position in z, so that in the end you are left with a 27x64 array of all the "lowest" or "highest" z values. Everything that doesn't have a z value (e.g. where the mesh isn't above a particular pressure taxel) is set to 0. Note that there are some areas in the middle of the mesh where the triangles are rather large (e.g. > 1" on a large SMPL body) and there may be a "0" hole in the middle of where the body is. L610-L632 take care of that by filling in the holes based on what is nearby.
You should be able to check by running a couple of epochs as the README suggests and zeroing all of the losses except for the PMR loss. As long as that loss trends down (and it should), gradients are propagating.
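Something along these lines (the loss names here are just placeholders, not the actual training script):

import torch

# placeholder loss values; only the PMR term is kept active
loss_terms = {'v2v': torch.tensor(0.42), 'joints': torch.tensor(0.17), 'pmr': torch.tensor(0.93)}
weights = {'v2v': 0.0, 'joints': 0.0, 'pmr': 1.0}
total_loss = sum(weights[name] * loss_terms[name] for name in weights)
print(total_loss)   # equals the PMR term; if it trends down across epochs, gradients are flowing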
-Henry
I assume you want to get this method working with the larger pressure array size (i.e. 33x68)? If you get stuck on this, let me know and I'll see if I can fix it. I don't want my messy code to block you.
-Henry
Hi Henry,
I'll try running the network with just the PMR loss. In the meantime, I was testing a small type-casting example (script added below), and the code breaks during backprop.
import torch

x = torch.rand(2, 3) * 10 + 1
x.requires_grad = True
print(x, x.requires_grad, x.grad)

# casting to an integer tensor detaches it from the autograd graph
x_int = x.type(torch.LongTensor)
print(x_int, x_int.requires_grad)  # requires_grad is now False

gt = torch.ones((2, 3))
criterion = torch.nn.L1Loss()
loss = criterion(x_int, gt)
print(loss)

loss.backward()  # fails: the loss has no grad_fn because the graph was cut at the cast
print(x.grad)
It produces this error:
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Do let me know your thoughts on this.
Hello Henry,
Thank you for assisting in understanding the paper better.
I am a bit confused about how the pressure loss is applied to Mod2 of the BPWNet model.
After going through the paper, I understand the loss flow as follows:
But in checking the code for PMR, you do an int conversion on the verts_taxel part (L555 - mesh_depth_lib), which is non-differentiable.
Since the output of the PMR module depends on verts_taxels_int, which has no gradient, how are you sending gradients back to verts_taxel and then to the model?
Best, Abhishek