fungiboletus closed this 4 years ago
Ok, I have to look into that one in more detail. I suspect it might break some tests. Also, if I understand correctly, it means that those messages that are not re-enqueued will be lost?
Yes, they are lost but it should be fine as the state machine is stopped. I don't think it has worked before.
Well, it might not be a problem in your case, but in the general case, it means that the client things need to be aware that some messages can be lost, and have to implement some failure handling to re-send those messages later on. Otherwise, the client thing may just be waiting for a response that will never come...
BTW, can you prepare a small example that replicates the issue?
Well, it does not seem to break any test, so in that sense, it could be merged. But it might be that no test actually focuses on your specific issue :-) (plus the fact that it might be hard to actually test).
I'll let @ffleurey decide on that one, based on the different comments.
thing Something includes TimerClientPort {
    statechart init Start {
        state Start {
            transition -> Done
        }
        final state Done {
        }
    }
}
thing SomethingElse includes TimerClientPort {
    statechart init Waiting {
        state Waiting {
            on entry do
                timer!timer_start(42, 100)
            end
            transition -> Process event m:timer?timer_timeout guard m.id == 42
        }
        state Process {
            transition -> Waiting
        }
    }
}
configuration SmallExample {
    instance something:Something
    instance somethingElse:SomethingElse
    instance t:TimerJS
    connector something.timer => t.timer
    connector somethingElse.timer => t.timer
}
I haven't run this code, but it should be enough to trigger the problem. something will stop after reaching its final state. However, it will continue to receive events from the timer, which are triggered by somethingElse.
If you run a profiler with a debugger, you will see that the JavaScript VM is very busy delaying messages for a state machine that will never restart. You can also look at the CPU usage.
Ok :-)
A state machine that has reached a final state ends up in an infinite loop for every new incoming message: the state machine is stopped, and therefore not ready, so the message is re-enqueued and sent again at the next tick. If a state machine with a final state uses timers, it will keep receiving messages that loop forever, using more and more CPU.
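The re-enqueue loop described above can be sketched as follows. This is a minimal illustration of the pattern, not the actual ThingML-generated JavaScript runtime; all names (StateMachine, ready, tick, etc.) are hypothetical:

```javascript
// Hypothetical sketch of the problematic message loop:
// a stopped machine never consumes messages, it only re-enqueues them.
class StateMachine {
    constructor() {
        this.ready = true; // becomes false once a final state is reached
        this.queue = [];
    }
    stop() { this.ready = false; } // final state reached
    receive(msg) { this.queue.push(msg); }
    // One scheduler tick: a ready machine consumes its messages,
    // a stopped machine pushes each one back for the next tick,
    // so the queue never drains and every tick does useless work.
    tick() {
        const pending = this.queue;
        this.queue = [];
        for (const msg of pending) {
            if (this.ready) {
                // normal event handling would go here
            } else {
                this.queue.push(msg); // retried forever, burning CPU
            }
        }
    }
}
```

With a timer still firing (as somethingElse causes here), each timeout message stays in the stopped machine's queue indefinitely; the fix discussed in this thread amounts to dropping such messages instead of re-enqueueing them.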