stevenygd / PointFlow

PointFlow: 3D Point Cloud Generation with Continuous Normalizing Flows
https://www.guandaoyang.com/PointFlow/
MIT License

An autograd hack? #27

Closed jatentaki closed 2 years ago

jatentaki commented 3 years ago

I'm trying to figure out the role of this line. It looks like an autograd hack similar to straight-through estimators of the form

logits = ...
smooth_probs = torch.sigmoid(logits)
hard_samples = (logits > 0).to(logits.dtype)

# values come from `hard_samples` but gradient from `smooth_probs`
binary_straight_through = hard_samples + smooth_probs - smooth_probs.detach()

but in this case it really seems to be a no-op. Could I ask for some explanation?
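For completeness, here is a runnable, self-contained version of the sketch above (the logit values are made up for illustration); the forward output matches the hard samples while the gradient follows the smooth path:

import torch

logits = torch.tensor([-1.0, 0.5, 2.0], requires_grad=True)
smooth_probs = torch.sigmoid(logits)
hard_samples = (logits > 0).to(logits.dtype)

# forward value is exactly hard_samples: the two smooth terms cancel
binary_straight_through = hard_samples + smooth_probs - smooth_probs.detach()
print(binary_straight_through)  # tensor([0., 1., 1.], grad_fn=...)

binary_straight_through.sum().backward()
print(logits.grad)  # sigmoid'(logits): the gradient of the smooth path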

Thank you

stevenygd commented 3 years ago

Yes, this is an autograd hack to make the computational graph sequential. We needed it because, at the time (PyTorch 1.4, I believe), there was a concurrency bug in PyTorch: training for long enough could trigger a race condition on a C++ variable that stores whether gradient computation is enabled or disabled.
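For illustration, a minimal sketch of that kind of sequencing hack (not the exact PointFlow line; the variable names are made up). Adding a term that is exactly zero leaves the values untouched, but it inserts a dependency edge between two otherwise independent branches, which fixes the order in which autograd traverses them:

import torch

x = torch.randn(8, requires_grad=True)

a = x.sin()  # branch 1
b = x.cos()  # branch 2, independent of branch 1 in the graph

# Numerically a no-op (adds exactly zero), but it wires branch 1 into
# branch 2's subgraph, so the two branches are no longer independent and
# the backward pass visits them in a fixed order rather than concurrently.
b = b + 0.0 * a.sum()

loss = (a + b).sum()
loss.backward()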