when debugging torch shenanigans i tend to rewrite the same helper functions and debug hacks over and over again. wouldn't it be nice to have a library that does this for us?
see pytorch#43144, https://github.com/zasdfgbnm/TorchSnooper
primary wants:
ability to control printing different things with booleans, like tikitrace.trace(loads=True, tensors=True, bleh=False)
init: context manager
init: function decorator
init: global
trace torch.load
trace tensor instantiation
auto trace large memory allocations (custom threshold)
memory usage dump
compare load time to the total start-to-end time of the trace, to find the bottleneck
trace individual tensor
trace tensor as part of a group (eg in a function call: tikitrace.trace(tensor, group="QuantLinear"))
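none of this exists yet, but a minimal sketch of what the context-manager init plus load tracing plus the load-vs-total timing comparison could look like (tikitrace and every name here are hypothetical; a stand-in object replaces torch so the sketch is self-contained):

```python
import time
import types
from contextlib import contextmanager

class Trace:
    """Hypothetical tikitrace-style tracer. Collects per-call timings."""
    def __init__(self):
        self.events = []   # (label, seconds) for each traced call
        self.total = 0.0   # wall time the trace context was active

    def wrap(self, fn, label):
        """Return fn wrapped so every call is timed and recorded."""
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                self.events.append((label, time.perf_counter() - t0))
        return wrapper

@contextmanager
def trace(module, attr, loads=True):
    """Patch module.attr (e.g. torch.load) for the duration of the block."""
    t = Trace()
    original = getattr(module, attr)
    if loads:
        setattr(module, attr, t.wrap(original, attr))
    t0 = time.perf_counter()
    try:
        yield t
    finally:
        t.total = time.perf_counter() - t0
        setattr(module, attr, original)   # restore the unpatched function

# usage, with a stand-in for the torch module:
fake_torch = types.SimpleNamespace(load=lambda path: {"path": path})
with trace(fake_torch, "load") as t:
    fake_torch.load("model.pt")
load_time = sum(s for label, s in t.events if label == "load")
# load_time / t.total is the load-vs-total comparison from the wants list
```

the same wrap() core could back the decorator and global init styles; only the patch/unpatch lifetime changes.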
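for the memory-usage dump and the large-allocation threshold, stdlib tracemalloc gets partway there: it can't hook each allocation as it happens, but it can snapshot and report everything above a custom threshold on demand. a rough sketch (function name and threshold are made up; tracemalloc itself is real stdlib):

```python
import tracemalloc

THRESHOLD = 1 * 1024 * 1024  # hypothetical custom threshold: 1 MiB

def dump_large_allocations(threshold=THRESHOLD):
    """Snapshot current allocations and report those above threshold."""
    snapshot = tracemalloc.take_snapshot()
    big = [s for s in snapshot.statistics("lineno") if s.size >= threshold]
    for stat in big:
        print(f"{stat.size / 1024:.0f} KiB allocated at {stat.traceback}")
    return big

tracemalloc.start()
buf = bytearray(2 * 1024 * 1024)   # a 2 MiB allocation to trip the threshold
big = dump_large_allocations()
tracemalloc.stop()
```

true "auto" tracing at allocation time would need something heavier (a polling thread, or CUDA-side hooks for GPU tensors); this only covers the on-demand dump.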