pytransitions / transitions

A lightweight, object-oriented finite state machine implementation in Python with many extensions
MIT License

Add transition queue for each model in AsyncMachine #490

Closed · jekel closed this 3 years ago

jekel commented 3 years ago

The main purpose of a separate queue for each model is to let the models' triggers execute in parallel instead of sequentially through one common queue shared by all models.

Otherwise, if one model's state takes a long time to process, all other models have to wait. And if that model triggers a transition on another model, the trigger is queued and may run much later, after all other queued triggers from the other models have executed, which isn't quite right. Also, in case of an exception inside a transition, the queue is cleared only for the model where the exception was raised, not for the whole machine.
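To make the problem concrete, here is a small hypothetical illustration (class, state, and trigger names are made up, and the 5-second sleep stands in for any slow callback) of two independent models sharing one queue via queued=True:

    import asyncio
    from transitions.extensions.asyncio import AsyncMachine

    class Worker:
        async def on_enter_busy(self):
            await asyncio.sleep(5)  # stand-in for long-running work that occupies the shared queue

    async def main():
        slow, fast = Worker(), Worker()
        machine = AsyncMachine(model=[slow, fast],
                               states=['idle', 'busy'],
                               transitions=[['start', 'idle', 'busy']],
                               initial='idle',
                               queued=True)
        # Both triggers land in the same queue, so the transition for `fast`
        # is only executed after `slow`'s 5-second callback has finished.
        await asyncio.gather(slow.start(), fast.start())

    asyncio.run(main())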

coveralls commented 3 years ago


Coverage increased (+0.002%) to 98.543% when pulling 937f192069cf34c48c31aa384f580079941a9f6d on jekel:async-transitions-queue into b398229dc0038f70884f11cb0d358a2b535e8acb on pytransitions:master.

aleneum commented 3 years ago

Hello @jekel,

an interesting and reasonable addition. Let's say we want to keep the current behaviour as the default but enable your workflow as well. As a developer, would you prefer another parameter, such as AsyncMachine(queued=True, parallel_queue=True), or another value for queued, such as AsyncMachine(queued='parallel')? I'd personally go for queued='parallel' because this would allow us to change the default of queued=True to individual queues in a breaking major release such as 0.9 or 0.10. 'Parallel' might not be the best word, though; maybe queued='model' is more appropriate.
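For illustration, a short usage sketch of the option discussed above (the exact API that ships with the PR may differ in detail):

    from transitions.extensions.asyncio import AsyncMachine

    class Model:
        pass

    model_a, model_b = Model(), Model()

    # queued=True keeps the current behaviour: one queue shared by all models.
    # queued='model' (the proposed value) gives each model its own queue, so a
    # long-running transition on model_a no longer delays triggers on model_b.
    machine = AsyncMachine(model=[model_a, model_b],
                           states=['A', 'B'],
                           initial='A',
                           queued='model')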

jekel commented 3 years ago

Hello, I think you are right that it would be better to introduce the new behaviour as the option queued='model' and, once it has been tested by more people, make it the default later. I will make a commit with the changes.

aleneum commented 3 years ago

Hi @jekel,

> I will make a commit with the changes

I already prepared some adjustments and was waiting for your feedback, so there is no need for adaptations on your side. I think I will be able to merge the PR today and release a new version today or tomorrow. Again, thank you for your contribution. It's always nice when an idea/suggestion is submitted so well prepared and outlined!

jekel commented 3 years ago

@aleneum I have another idea: when remove_model is called on the machine, we also need to clear all queues and stop processing any transitions from the removed model. What do you think about it? But it could only be done once we have a separate queue for each model.

aleneum commented 3 years ago

> when remove_model is called on the machine, we also need to clear all queues and stop processing any transitions from the removed model

Good point. I'll see what can be done about it and will let you know.

aleneum commented 3 years ago

Just a short update: I wrote some tests (which I didn't account for in my previous, optimistic time estimate) for the 'model' queue mode and also added some documentation. Removing queued events from a single queue is a bit more complicated than expected. I'll keep you updated.

jekel commented 3 years ago

Yeah, it's a good question how to clean the queue. I think the best way is to replace it with a new one, filtered by the rule that an event's model is not the model being removed.

aleneum commented 3 years ago

The filtering itself is actually quite straightforward:

    def remove_model(self, model):
        models = listify(model)

        for mod in models:
            self.models.remove(mod)
            if self.has_queue == 'model':
                # per-model queues: simply drop the removed model's queue
                del self._transition_queue[mod]
            elif self.has_queue:
                # single shared queue: filter out events that belong to the removed model
                self._transition_queue = deque(e for e in self._transition_queue if e.args[0] != mod)

The processing in Machine._process, however, does not expect the queue to be changed while it runs.
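For context, a simplified paraphrase (not verbatim library code) of what the queued processing loop roughly does, which is why swapping or filtering the deque mid-run is problematic:

    from collections import deque

    def process_queue(queue: deque) -> None:
        # Execute the head of the queue, then pop the head.
        while queue:
            trigger = queue[0]
            trigger()          # may enqueue further triggers while running
            queue.popleft()    # assumes the head is still the trigger just run
        # If remove_model replaces or filters the deque while `trigger` is
        # running, the element popped here may be a different, still-pending event.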

aleneum commented 3 years ago

I merged your changes into feature-model-queues (see diff) but had to take another approach, using a dict rather than a defaultdict. In #475, empty dictionary keys (holding empty lists) caused quite a memory leak when models were constantly added and removed. This is why I think explicitly adding and removing dictionary keys, pointing to either a deque or self._transitions_queue, is the better solution. Alternatively, one could use a defaultdict but remove a key whenever its list becomes empty; that, however, would probably have a considerable performance impact. Furthermore, I struggled a bit with writing useful tests for using and removing model queues.
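As a rough illustration of the trade-off (hypothetical names, not the PR's actual code):

    from collections import defaultdict, deque

    class Model:
        pass

    mod = Model()

    # defaultdict: merely looking up a removed model's key re-creates an empty
    # deque as a side effect, so stale keys pile up when models are constantly
    # added and removed (the leak described in #475).
    queues = defaultdict(deque)
    queues[mod]
    assert mod in queues          # key exists although nothing was ever stored

    # plain dict: keys are created in add_model and deleted in remove_model,
    # so the mapping only holds queues for currently registered models.
    queues = {}
    queues[mod] = deque()         # add_model
    del queues[mod]               # remove_model
    assert mod not in queues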

What's your opinion on using a dict instead of a defaultdict? Can you think of more streamlined tests for test_model_queue and test_queued_remove?

aleneum commented 3 years ago

Hmm... remove_model still does not work as intended. It won't work with more than one model left in the list and also results in multiple event queues.

jekel commented 3 years ago

So... first of all, if there is only one way to work with the queues, from the inside, I'm fine with the lists being created and destroyed explicitly by the Machine, so a dict is fine.

Another thing: in your code you assume that remove_model is called from inside a running transition (here and here), so I think there will be an exception when remove_model is called while the queue is empty.

Like in my case: the model is removed from the outside, and there may be no running transitions at all.

aleneum commented 3 years ago

> so I think there will be an exception when remove_model is called while the queue is empty

Right, I need to add this to the tests.

I implemented a _DictionaryMock which ignores keys and always returns self._transitions_queue. This way, the handling of model queues and of the single queue can be kept similar.
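A minimal sketch of that idea (the name and details of the actual _DictionaryMock in the PR may differ):

    from collections import deque

    class SingleQueueDict(dict):
        """Hypothetical stand-in: ignores its keys and always hands back the
        one shared queue, so per-model and single-queue handling can use the
        same dictionary-style access."""

        def __init__(self, shared_queue):
            super().__init__()
            self._queue = shared_queue

        def __getitem__(self, key):
            return self._queue

        def __setitem__(self, key, value):
            self._queue = value

    queues = SingleQueueDict(deque())
    queues['model_a'].append('event_1')
    assert list(queues['model_b']) == ['event_1']   # same underlying deque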

aleneum commented 3 years ago

Resolved in #492.