Closed: thedrow closed this issue 3 years ago.
Why is specialized library handling sensible for a recursive condition that is the result of explicit user coding? It is rather obvious that this is an infinite recursion, unless I'm missing something really big about this code example.
Python shouldn't segfault but should raise an exception instead, and if it does segfault, we need to mitigate it because we don't want the entire process to crash. If this happens in a different thread, it has a different stack.
Executing

```python
from transitions import Machine

def foo(e):
    e.machine.to_c()

m = Machine(states=['a', 'b', 'c'], initial='a', send_event=True)
m.finalize_event.append(foo)
m.to_b()
```

gives me a(n expected) `RecursionError: maximum recursion depth exceeded while calling a Python object`. Callbacks passed to `finalize` have no specific try/catch handling so far. But if your setting segfaults, I doubt that

```python
try:
    m.to_b()
except RecursionError:
    pass
```

would make a difference.
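For comparison, outside of whatever triggered the segfault, an infinite recursion in plain Python raises a catchable `RecursionError` rather than crashing the interpreter, so a try/except like the one above does work in the non-crashing case. A minimal, library-independent sketch:

```python
# Plain Python recursion hits the interpreter's recursion limit and raises
# a RecursionError that can be caught like any other exception.
def recurse(n):
    return recurse(n + 1)

try:
    recurse(0)
    caught = False
except RecursionError:
    caught = True  # reached without the process crashing
```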
Right now I don't see how such a recursion -- especially if it's a bit less obvious than the presented case -- can be anticipated. It is, however, possible to track the current depth of an `Event` with a mixin and hit the brakes when things get too deep:
```python
from transitions import Machine, Event

def foo(e):
    e.machine.to_c()

class RecursionTracking(Event):
    limit = 100

    def __init__(self, *args, **kwargs):
        self.current_depth = 0
        super().__init__(*args, **kwargs)

    def _process(self, event_data):
        if self.current_depth > self.limit:
            print(f"Recursion limit of {self.limit} reached")
            # raise RecursionError(f"Recursion limit of {self.limit} reached")
            return False
        self.current_depth += 1
        res = super()._process(event_data)
        self.current_depth -= 1
        return res

class RecursionLimiter(Machine):
    event_cls = RecursionTracking

m = RecursionLimiter(states=['a', 'b', 'c'], initial='a', send_event=True)
m.finalize_event.append(foo)
m.to_b()
```
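The same depth-counting idea also works independently of transitions. Here is a generic sketch of a re-entrancy guard written as a decorator; the name `depth_limited` and the chosen limit are illustrative, not part of any library API:

```python
import functools

def depth_limited(limit=100):
    """Refuse re-entrant calls of the decorated function deeper than `limit`."""
    def decorator(func):
        depth = 0  # shared across all nested calls of this function

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal depth
            if depth >= limit:
                raise RecursionError(
                    f"depth limit of {limit} reached in {func.__name__}"
                )
            depth += 1
            try:
                return func(*args, **kwargs)
            finally:
                depth -= 1  # always unwind, even when an exception propagates
        return wrapper
    return decorator

@depth_limited(limit=5)
def ping():
    ping()  # deliberately re-entrant; stopped after 5 nested calls
```

Compared to the `Event` mixin above, this raises a meaningful error long before the interpreter's own recursion limit is hit.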
Feel free to close this as 'won't fix' if that's the case.
Well, I'd gladly fix it (even though I am not a hundred percent sure I know what 'it' actually is) if it's considered a bug. Right now I don't understand (yet) how (and also why) a state machine library should prevent recursion errors, but maybe you mean something different. As mentioned earlier, I cannot reproduce the segfault issue you faced but "only" an expected `RecursionError`.
The issue I currently see is that the assumption that finalize callbacks are handled in a way that does not raise exceptions cannot be upheld. Consequently, finalize callbacks should also be guarded with a try/except statement.
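The guarding suggested here could look roughly like the following generic sketch (an illustrative helper, not transitions' actual implementation): run every callback, collect failures, and either suppress or re-raise them so one failing callback does not prevent the rest from running.

```python
import logging

logger = logging.getLogger(__name__)

def run_finalize_callbacks(callbacks, event_data, suppress=True):
    """Run all callbacks; with suppress=True, log failures and keep going."""
    errors = []
    for cb in callbacks:
        try:
            cb(event_data)
        except Exception as exc:
            errors.append(exc)
            logger.exception("finalize callback %r failed", cb)
            if not suppress:
                raise  # surface the first failure immediately
    return errors
```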
What worried me is that Python crashes that way. If you can't reproduce the segfault it's probably not worth looking into.
I see. Well, if there is some leaking memory or other resource management issue, this is of course a problem. But right now I would assume that the segfault is setting-related and not particularly caused by transitions. If you find evidence that this assumption is not correct, let me know.
1b3854c added try/except for finalize callbacks and thus silences exceptions raised there. From now on, only the original error will be raised.
Hello @thedrow,
I am not sure if this is still relevant for you since the issue is rather old, but I am curious whether you somehow or somewhere rely on the fact that, starting with transitions 0.8.8, exceptions in `finalize_event` are suppressed. I was considering removing the try/catch from `finalize_event` again to enable users to deal with issues instead of having to hunt them down first, but wanted to hear about your experience first.
I haven't worked on Jumpstarter for a while, but I think that was the problem, or part of it. Generally, when you finalize, you should disallow transitions. That would fix this issue permanently and provide a meaningful error.
Naturally, this makes total sense, but we need to find a way to raise an error much earlier so that the process doesn't crash.
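One way to raise such an error early is a re-entrancy flag that event dispatching would check before triggering anything while finalize callbacks are running. A library-independent sketch (the class and its names are illustrative, not transitions' API):

```python
class FinalizeGuard:
    """Context manager that forbids triggering transitions while finalizing."""

    def __init__(self):
        self._finalizing = False

    def check(self):
        # An event trigger would call this before dispatching.
        if self._finalizing:
            raise RuntimeError(
                "Cannot trigger a transition from a finalize callback"
            )

    def __enter__(self):
        self._finalizing = True
        return self

    def __exit__(self, *exc_info):
        self._finalizing = False
        return False  # never swallow exceptions
```

Wrapping the finalize phase in `with guard:` would turn the silent infinite recursion into an immediate, meaningful error on the very first re-entrant trigger.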