[Open] pbsds opened this issue 2 years ago
I have implemented a similar feature in my own fork of Rich (my own requirements are probably not general enough for direct inclusion into Rich). If anyone needs inspiration, see cmckain@rich and take note of `locals_to_ignore` and `clean_values`.
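For anyone wanting to prototype the same idea outside that fork, here is a minimal sketch of such a cleaning pass. The names `locals_to_ignore` and `clean_values` come from the fork above; the implementation below is only a guess at the idea, not the fork's actual code:

```python
from typing import Any, Callable, Iterable

# Hypothetical helper in the spirit of the fork's locals_to_ignore / clean_values:
# drop unwanted names and shorten noisy reprs before they reach the traceback renderer.
def clean_locals(
    frame_locals: dict[str, Any],
    locals_to_ignore: Iterable[str] = ("__builtins__",),
    clean_values: Callable[[Any], str] = repr,
    max_repr: int = 200,
) -> dict[str, str]:
    cleaned = {}
    for name, value in frame_locals.items():
        if name in locals_to_ignore or name.startswith("__"):
            continue  # filter the variable out entirely
        text = clean_values(value)
        if len(text) > max_repr:
            text = text[:max_repr] + "…"  # truncate overly long reprs
        cleaned[name] = text
    return cleaned
```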
How about?

`locals_suppress=<callable>`
`locals_max_length=<optional int>`
`rich.traceback.install` doesn't have these parameters. How can I do this?
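Not the filter callable proposed above, but recent Rich releases do have some truncation knobs. A minimal sketch, assuming your Rich version (roughly 10.11 or newer) accepts `locals_max_length` / `locals_max_string` on `install()`:

```python
from rich.traceback import install

# Assumption: these keyword arguments exist in your Rich version;
# check help(rich.traceback.install) if unsure.
install(
    show_locals=True,
    locals_max_length=10,   # max number of container items shown per local
    locals_max_string=80,   # max characters shown for any string repr
)
```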
Bumping this issue -- I'm running into similar problems where the locals panel prints out egregiously large numpy arrays.

I really like having the locals, but having to turn them off when working with arrays is not great.
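As a stopgap for the numpy case specifically, one option is to shorten numpy's own repr globally, so whatever the traceback shows is already summarized. A sketch:

```python
import numpy as np

# Summarize large arrays in their repr/str, which is what ends up in the
# locals panel; the thresholds here are arbitrary examples.
np.set_printoptions(threshold=20, edgeitems=3, linewidth=100)

big = np.arange(1_000_000)
print(repr(big))  # array([     0,      1,      2, ..., 999997, 999998, 999999])
```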
Have you checked the issues for a similar suggestion?
#318 and #1378 are related.
How would you improve Rich?
Showing the local variables in Rich tracebacks is incredibly helpful, but this does at times become too verbose, prompting me to disable it. This is especially true when dealing with types such as `torch.nn.Module`, which feature very long `__repr__` and `__str__` outputs (an example of a single frame is featured below). My feature request is twofold: the ability to filter certain local variables entirely, and the ability to suppress/truncate the length of others.

Variable names with a double underscore prefix (e.g. `__init__`) are a prime target for filtering. I imagine parametrizing such behavior in `rich.traceback.install` with an optional lambda function (e.g. `locals_filterer=lambda x: not x.startswith("__")`).

For suppressing/truncating the length, one could either fall back to the default `object.__repr__` format, or simply clip all `__repr__()` output to `n` lines with a trailing ellipsis (as dictionaries already seem to do). For deciding which locals to suppress, if not all, one could either provide a list of qualified class names (`type(obj).__qualname__`) and types (kinda like the existing `suppress` argument for modules), or provide a lambda which determines whether a value should be suppressed (e.g. `locals_suppress=lambda x: any(isinstance(x, i) for i in list_of_types_to_suppress)`).
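To make that concrete, usage might look like the sketch below; `locals_filterer` and `locals_suppress` are the hypothetical parameters proposed here, not part of Rich's current API:

```python
import torch
from rich.traceback import install

# Hypothetical keyword arguments from this feature request; they do not exist in Rich today.
install(
    show_locals=True,
    # Hide dunder names from the locals panel entirely.
    locals_filterer=lambda name: not name.startswith("__"),
    # Fall back to a short default repr for noisy types such as torch.nn.Module.
    locals_suppress=lambda value: isinstance(value, torch.nn.Module),
)
```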
Example output (note the `module = MyMachineLearningModel`):
```
│ /home/pbsds/ntnu/xxxxxxxxxxxxxxxxxxxxxxxxxx/yyyyyy/cli.py:332 in log_training_setup │
│                                                                                      │
│   329 │ │ if logger is not None:                                                     │
│   330 │ │ │   print(f"{logger.__class__.__qualname__} hparams:")                     │
│   331 │ │ │   print_dict_twocolumn(config.logging)                                   │
│   ❱ 332 │ │ │ logging.log_config(logger, logger = {"_class": logger.__class__.__name__} | │
│   333 │ │ │                                                                          │
│   334 │ │ │   # host info                                                            │
│   335 │ │ │   def cmd(cmd: Union[str, list[str]]) -> str:                            │
│                                                                                      │
│ ╭───────────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────────╮ │
│ │ args = Namespace(mode='module', data_dir=
```

What problem does it solve for you?
When running my program on a supercluster, which often involves a queue time of several hours, it sucks for it to immediately crash without me having set `show_locals=True` in `rich.traceback.install`, simply because it is too verbose to leave enabled. Having it enabled all the time, without it being too noisy, would greatly reduce development friction and put a smile on my face.