pytorch / ignite

High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.
https://pytorch-ignite.ai
BSD 3-Clause "New" or "Revised" License

Add the logging of dict metrics #3294

Open nowtryz opened 2 weeks ago

nowtryz commented 2 weeks ago

🚀 Feature

The request is to add support for logging Mapping metrics in the logging framework.

The ignite.metrics.Metric class supports Mapping results, as the code linked below shows. However, BaseOutputHandler does not support dictionary metrics and warns about them.

https://github.com/pytorch/ignite/blob/edd5025e7d597a6e5fe45c5173487c37d3f9d1df/ignite/metrics/metric.py#L488-L494
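For illustration, a minimal hypothetical metric returning a Mapping could look like the sketch below; the class name and the way counts are accumulated are made up for the example and are not part of ignite:

from ignite.metrics import Metric


class PrecisionRecall(Metric):
    # Hypothetical metric, only to illustrate a Mapping result.
    def reset(self):
        self._tp = self._fp = self._fn = 0

    def update(self, output):
        y_pred, y = output
        self._tp += int(((y_pred == 1) & (y == 1)).sum())
        self._fp += int(((y_pred == 1) & (y == 0)).sum())
        self._fn += int(((y_pred == 0) & (y == 1)).sum())

    def compute(self):
        # Returning a Mapping is accepted by Metric.completed (see the link above):
        # each key is written into engine.state.metrics, and the whole dict is also
        # stored under the name used when attaching the metric.
        precision = self._tp / max(self._tp + self._fp, 1)
        recall = self._tp / max(self._tp + self._fn, 1)
        return {"precision": precision, "recall": recall}


# PrecisionRecall().attach(evaluator, "pr")
# -> evaluator.state.metrics gets "precision", "recall" and "pr" (the dict itself),
#    and BaseOutputHandler currently warns about the dict entry.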

One can simply ask the logger to report the metric names produced by the Metric directly, as those are stored in the metric state regardless of the name the metric was attached with. But I feel this breaks the kind of "namespaces" that loggers seem to use.

I would find it practical if the logger could handle mappings and log their content as sub-values of the metric itself.

This could be achieved by editing BaseOutputHandler, which would fix the issue in every existing logger. There should not be any side effects: since the logger already warned users about mappings, I imagine very few users currently have a mapping metric being logged whose values would suddenly appear after upgrading to a version with this feature.
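As a rough sketch (not existing ignite code), the handler could flatten nested mappings into "parent/child" keys before handing values to the logger backend; the helper name below is illustrative only:

from typing import Any, Dict, Mapping


def flatten_metrics(metrics: Mapping[str, Any], sep: str = "/") -> Dict[str, Any]:
    # Expand {"dict_value": {"a": 111}} into {"dict_value/a": 111},
    # leaving scalar entries untouched.
    flat: Dict[str, Any] = {}
    for name, value in metrics.items():
        if isinstance(value, Mapping):
            for sub_name, sub_value in flatten_metrics(value, sep).items():
                flat[f"{name}{sep}{sub_name}"] = sub_value
        else:
            flat[name] = value
    return flat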

vfdev-5 commented 2 weeks ago

@nowtryz thanks for the feature request. Can you please provide a code snippet with an example of what you would like to have?

One can simply ask the logger to report the metric names produced by the Metric directly, as those are stored in the metric state regardless of the name the metric was attached with.

There is a keyword "all" in OutputHandlers, e.g. TensorBoard: https://pytorch.org/ignite/generated/ignite.handlers.tensorboard_logger.html#ignite.handlers.tensorboard_logger.OutputHandler:

metric_names (Optional[List[str]]) – list of metric names to plot or a string "all" to plot all available metrics.

I would find it practical if the logger could handle mappings and log their content as sub-values of the metric itself.

Yes, this makes sense.

So, if I understand correctly, you would like a use-case like this?

evaluator.state.metrics = {
  "scalar_value": 123,
  "dict_value": {
    "a": 111,
    "b": 222,
  } 
}

handler = OutputHandler(
  tag="validation",
  metric_names="all",
)

handler(evaluator, tb_logger, event_name=Events.EPOCH_COMPLETED)
# Behind the scenes it would call
# tb_logger.writer.add_scalar("scalar_value", 123, global_step)
# tb_logger.writer.add_scalar("dict_value/a", 111, global_step)
# tb_logger.writer.add_scalar("dict_value/b", 222, global_step)

nowtryz commented 2 weeks ago

Hi @vfdev-5,

Yes exactly, the code snippet you provided is a good example. Another example would be the following:

evaluator.state.metrics = ... # kept unchanged
handler = OutputHandler(
  tag="validation",
  metric_names="dict_value",
)

handler(evaluator, tb_logger, event_name=Events.EPOCH_COMPLETED)
# Behind the scenes it would call
# tb_logger.writer.add_scalar("dict_value/a", 111, global_step)
# tb_logger.writer.add_scalar("dict_value/b", 222, global_step)