sdesrozis opened 3 years ago
We could also think of different levels of debugging:
Hi @sdesrozis and @vfdev-5, I think this is a great idea and a much-needed tool. I am interested in implementing it.
I agree with @vfdev-5's idea of having it in levels. We can pass the level in engine.run. So level 1 could be dataflow/workflow debugging, that means running the engine and logging all the events and handlers.
Something like
ITERATION STARTED
Input Shape -
Handler 1 called
....
ITERATION COMPLETED
Handler 2 called
...
Doing this is straightforward; we just need to set the logger level in engine.run.
Level 2 would be logging information during training, like loss, LR and scheduler values. For this we need to make changes to the training_step the user provides, and I am not sure how we can go about that.
Then level 3 would be gradients (we can use PyTorch hooks).
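For instance, level 3 could be sketched along these lines (a hypothetical helper, not part of the current Engine API; engine and model are assumed to already exist):

def register_grad_debug_hooks(engine, model):
    # Hypothetical sketch: attach a tensor hook to each trainable parameter
    # and log its gradient norm through the engine's logger.
    def make_hook(name):
        def hook(grad):
            engine.logger.debug(f"grad norm of {name}: {grad.norm().item():.4f}")
        return hook

    for name, param in model.named_parameters():
        if param.requires_grad:
            param.register_hook(make_hook(name))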
What do you think?
@Ishan-Kumar2 I totally agree. Doing it by level is a smart way to provide a good API to users and schedule the team's work.
Yesterday, we discussed with @Priyansi another debug feature which could be great. We thought we would provide a visual description of the workflow. It would be generated statically by the engine, similar to what could be considered at the first level. We could generate a dot file to describe the links between the handlers through what is added to the state.
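Purely as an illustration, such a dot export could look roughly like the sketch below. It relies on the internal engine._event_handlers mapping (assumed here to store (handler, args, kwargs) tuples), which is not public API and may change:

def dump_workflow_dot(engine, path="workflow.dot"):
    # Illustrative sketch: write a Graphviz description of which handlers
    # are attached to which events.
    lines = ["digraph workflow {"]
    for event, handlers in engine._event_handlers.items():
        for handler, _, _ in handlers:
            name = getattr(handler, "__name__", repr(handler))
            lines.append(f'    "{event}" -> "{name}";')
    lines.append("}")
    with open(path, "w") as f:
        f.write("\n".join(lines))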
@Ishan-Kumar2 I will be busy very soon because of a change in job, so I can't contribute the way I would like to. If you are interested in contributing to this feature, I would be happy to discuss it with you and help in any way I can.
Hello, @sdesrozis @vfdev-5. Is this feature request still on the cards? I would love to discuss further and contribute.
@DevPranjal yes, it is still an open issue that we would like to have.
Great, so if I can compile the previous discussions, we need level-wise debugging modes:
@vfdev-5 The debug level API will be accessed through engine.run(...)?
I was thinking about engine.debug(enabled=True, level=level) and engine.debug(enabled=False).
Ok. Can we just have a default level = 0 instead of the enabled arg? Also, are we expecting the intermediate output to be the same as mentioned previously in the comment?
ITERATION STARTED
Input Shape -
Handler 1 called
....
ITERATION COMPLETED
Handler 2 called
...
Yes, you are right, maybe a binary flag would be a better option:
# to enable
engine.debug(level=Engine.DEBUG_EVENTS | Engine.DEBUG_OUTPUT | Engine.DEBUG_GRADS)
# to disable
engine.debug(level=0)
The provided flag names are just examples and can be renamed if better names are found.
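Just to illustrate one possible shape for such combinable flags (names and values below are placeholders, not an agreed API), something like enum.IntFlag could be used:

from enum import IntFlag, auto

class DebugLevel(IntFlag):
    # Placeholder flag names; combinable with bitwise OR.
    NONE = 0
    EVENTS = auto()   # log fired events and called handlers
    OUTPUT = auto()   # log loss / LR / scheduler values
    GRADS = auto()    # log gradient statistics

# enable events + gradients
level = DebugLevel.EVENTS | DebugLevel.GRADS
if level & DebugLevel.GRADS:
    print("gradient debugging enabled")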
Sure.
For a start, I tried the following within the Engine class:

def debug(self, level: int = 0):
    self.debug_level = level

def _print_debug_mode(self, string: str, level: int):
    # print only if the message's level is enabled by the configured debug level
    if level <= self.debug_level:
        print(string)

def _fire_event(...):
    ...
    self._print_debug_mode(f'firing handler: {func.__name__}', level=1)
    ...
@DevPranjal there is also self.logger
https://github.com/pytorch/ignite/blob/3a286b1d13a4a0476b3ee8e8046f16465818c9f6/ignite/engine/engine.py#L125
Yes.
Can I change the approach to have custom log levels, using logging.addLevelName for all three levels? Or do I simply change the above to:
def _log_debug_mode(self, string: str, level: int):
    # log only if the message's level is enabled by the configured debug level
    if level <= self.debug_level:
        self.logger.debug(string)
Or anything else?
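For reference, a small sketch of what the logging.addLevelName route could look like (the level numbers below are arbitrary examples, not a proposal):

import logging

# Custom level numbers are arbitrary examples, chosen just below logging.DEBUG (10).
DEBUG_EVENTS = 9
DEBUG_OUTPUT = 8
DEBUG_GRADS = 7

logging.addLevelName(DEBUG_EVENTS, "DEBUG_EVENTS")
logging.addLevelName(DEBUG_OUTPUT, "DEBUG_OUTPUT")
logging.addLevelName(DEBUG_GRADS, "DEBUG_GRADS")

logging.basicConfig(level=DEBUG_GRADS)  # show everything down to DEBUG_GRADS
logger = logging.getLogger("engine-debug-demo")
logger.log(DEBUG_EVENTS, "firing handler: train_step")  # emitted as DEBUG_EVENTS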
Does @puhuk's PR solve this issue, or is it still open?
Hi @sawradip, I'm working on it :)
@puhuk @sawradip the fact that someone is working on this issue does not mean that the produced PR will certainly be merged. It can happen that a contributor stops working on the PR and it is suspended, or that the PR's code does not meet requirements or has poor quality... Anyway, @sawradip, you can also try to propose your own version of a solution. Otherwise, pick another issue from the list.
Also, you can both collaborate and send 2 PRs to solve the issue.
🚀 Feature
The idea is to provide a debug mode as discussed here: https://github.com/pytorch/ignite/issues/1989#issuecomment-833643158.
All the possible options can be discussed here. We could add a helper to track the workflow of handlers from the engine run. On the other hand, some specific debug tools are needed to track the values of PyTorch objects (LR, loss, grads, etc.). Note that the preferred way to extend ignite is the handler design, so let's create a new set of tools for debugging.
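As a rough illustration of that handler-based direction (the function name and signature below are made up for the example, not an agreed API), a debug tool could itself just be a handler attached to engine events:

from ignite.engine import Engine, Events

def attach_debug_handlers(engine: Engine, optimizer):
    # Illustrative sketch: log the engine output and current learning rate
    # after every iteration, using the engine's own logger.
    @engine.on(Events.ITERATION_COMPLETED)
    def log_values(engine):
        lr = optimizer.param_groups[0]["lr"]
        engine.logger.debug(
            f"iter {engine.state.iteration}: output={engine.state.output}, lr={lr}"
        )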