Closed: daniel347x closed this 2 years ago
This looks great. Any chance you can help write a small test for this to make sure it is indeed clamping the gradients correctly?
I'd be happy to if you can give me just a hint or two about how/where it would go... It's a bit tricky for me to see how to extract the logic in this case from where it actually runs inside the 'backward' function...
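For illustration, here is a minimal, self-contained sketch of what such a test could look like, assuming the clamping happens inside a custom autograd Function's `backward()`. The class name `ClampedOp` and the clamp bounds are placeholders for this example, not the actual identifiers in this PR:

```python
import torch


class ClampedOp(torch.autograd.Function):
    """Toy op whose backward clamps incoming gradients to [-1e4, 1e4]."""

    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, grad_output):
        # Clamp the upstream gradient so an occasional inf cannot poison training.
        grad = grad_output.clamp(min=-1e4, max=1e4)
        # Chain rule for y = 2 * x.
        return grad * 2


def test_backward_clamps_infinite_gradient():
    x = torch.randn(4, requires_grad=True)
    y = ClampedOp.apply(x)
    # Feed an infinite upstream gradient directly into backward().
    grad_out = torch.full_like(y, float("inf"))
    y.backward(grad_out)
    assert torch.isfinite(x.grad).all(), "gradient should be clamped to a finite value"


test_backward_clamps_infinite_gradient()
```

The idea is simply to inject an infinite upstream gradient and assert that whatever reaches the leaf tensors is finite; adapting it to the real op in this PR would mostly mean swapping in the actual module/function.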
See the new PR, which is identical to this one but uses a different branch from my fork:
This commit addresses an intermittent but deadly crash bug that has been destroying my training runs: a very occasional infinite gradient in the 'backward' function.
In this commit, functionality remains unchanged by default.
However, an optional flag has been added that allows clamping the gradient in the 'backward' function. The flag takes the form of an int or a sequence giving the min/max values.
An optional third value in the passed sequence is interpreted as a Boolean that indicates whether to print a warning to the console whenever an infinite gradient is clamped. The default is False.
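As a rough sketch of how the flag could be interpreted and applied inside 'backward' (the names `clamp_grad` and `parse_clamp_flag` are illustrative assumptions, as is treating a single int as symmetric bounds; they are not the exact identifiers used in this PR):

```python
import torch


def parse_clamp_flag(clamp_grad):
    """Normalize the flag: a single number is assumed to mean symmetric bounds,
    a sequence gives (min, max), and an optional third value is a Boolean that
    enables a console warning (default False)."""
    if isinstance(clamp_grad, (int, float)):
        return -abs(clamp_grad), abs(clamp_grad), False
    lo, hi = clamp_grad[0], clamp_grad[1]
    warn = bool(clamp_grad[2]) if len(clamp_grad) > 2 else False
    return lo, hi, warn


def clamp_gradient(grad, clamp_grad):
    """Clamp a gradient tensor in backward, optionally warning on infinities."""
    lo, hi, warn = parse_clamp_flag(clamp_grad)
    if warn and not torch.isfinite(grad).all():
        print("Warning: non-finite gradient encountered in backward; clamping.")
    return grad.clamp(min=lo, max=hi)
```

Usage would then be something like `grad = clamp_gradient(grad, (-1e4, 1e4, True))` at the point in 'backward' where the infinite gradient shows up.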
Support for PyTorch only.