Closed floriangc closed 3 years ago
Going deeper, the problem seems related to the demand augmentation and to the update of the internal structures at each time step.
By "the structures", do you mean the buffer_string attribute of the Simulator, or the whole update of the vehicles' data?
The update of the vehicles' data. There is an internal loop that spreads updates to all subscribers (parsing data from the XML trace into a dict and then into the object); it runs as a for loop within the publisher. I'm checking why the performance goes from 1 ms to 5 ms; this might be due to the buffer size (still exploring).
When updating the buffer there is a dispatch to objects (called subscribers) to update internal object data, e.g. a vehicle's data. This is done along multiple channels (as in a pub-sub pattern), although at the moment there is only one channel to dispatch information on.
Afterwards, each of the subscribers is updated via this loop. The loop scales badly with the number of vehicles, since the number of vehicles equals the number of objects the for loop iterates over.
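To make the scaling concrete, here is a minimal pub-sub sketch of the dispatch described above. The class and method names (`Publisher`, `attach`, `dispatch`) are illustrative, not the actual SymuPy API; the point is that the dispatch loop is O(n) in the number of subscribers, i.e. linear in the fleet size.

```python
class Publisher:
    """Minimal pub-sub sketch: one channel, many subscribers."""

    def __init__(self):
        self.channels = {"vehicles": []}

    def attach(self, channel, subscriber):
        self.channels[channel].append(subscriber)

    def dispatch(self, channel, data):
        # O(n) in the number of subscribers: with one subscriber per
        # vehicle, this loop grows linearly with the fleet size.
        for sub in self.channels[channel]:
            sub.update(data)


class Vehicle:
    def __init__(self, vehid):
        self.vehid = vehid
        self.state = {}

    def update(self, data):
        # Keep only this vehicle's record from the parsed trace.
        self.state = data.get(self.vehid, {})


pub = Publisher()
vehicles = [Vehicle(i) for i in range(340)]
for v in vehicles:
    pub.attach("vehicles", v)

# One simulation step: parsed trace data is pushed to all 340 subscribers.
pub.dispatch("vehicles", {0: {"speed": 12.3}})
```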
Maybe to speed things up a little, we could consider using list comprehensions for those loops?
But there is only one channel here... My bad, that will not improve this case.
Parsing around 340 vehicles into a dict is not costly, but it can take up to 2 ms:
```python
from xmltodict import parse
%timeit parse(s.request.query)
# 1.86 ms ± 89.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```
The weird thing is the exponential growth in time in your first plot. The parsing of the string into a dict should increase linearly in time, so that's maybe not the problem.
Actually, `parse` does more complex things than linear parsing, because it creates nested structures (dicts of dicts) when the XML structure allows for it. I used the library since it reduced the coding effort for this transformation, but we could try a custom solution that keeps the time increase linear.
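A custom single-pass alternative could look like the sketch below, built on the standard library only. It assumes a flat SymuVia-like trace where each vehicle is one element with its data as attributes; the tag and attribute names (`TRAJ`, `id`, `vit`) are illustrative and should be checked against the real trace format.

```python
import xml.etree.ElementTree as ET


def parse_vehicles(xml_string):
    """Single linear pass over the trace: one flat dict per vehicle, keyed by id."""
    root = ET.fromstring(xml_string)
    return {traj.get("id"): dict(traj.attrib) for traj in root.iter("TRAJ")}


# Illustrative trace fragment (not the exact SymuVia schema).
sample = (
    '<INST val="1.0">'
    '<TRAJS><TRAJ id="0" vit="12.3"/><TRAJ id="1" vit="9.8"/></TRAJS>'
    '</INST>'
)
vehicles = parse_vehicles(sample)
```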
On the other hand, there are still ~2 ms/veh of processing to account for; this could be related to memory handling.
An in-memory copy of the string returned by SymuVia?
Performance now behaves like this. For reproduction, please check https://github.com/licit-lab/symupy-examples
Description

When running a SymuVia simulation with SymuPy, the time to perform a `next_step` increases with the number of steps.

Reproduce
Use the configuration file: https://github.com/licit-lab/symupy-examples/blob/main/network/SingleLink_symuvia_withcapacityrestriction.xml
And run the simulation with the code:
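The original code snippet did not survive the page extraction. As a stand-in, here is a sketch of a timing harness for the simulation loop; the real loop calls SymuPy's `next_step`, but a stub step function is used here so the harness is self-contained.

```python
import time


def time_steps(step_fn, n_steps):
    """Run n_steps iterations and return the wall-clock duration of each one."""
    durations = []
    for _ in range(n_steps):
        t0 = time.perf_counter()
        step_fn()  # in the real reproduction, this is the next_step call
        durations.append(time.perf_counter() - t0)
    return durations


# With the real simulator, a growing trend in `durations` reproduces
# the reported per-step slowdown.
durations = time_steps(lambda: None, 10)
```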
After tracking down the problem, it seems that this line takes more and more time during the simulation.
The issue might be related to the size of the buffer that SymuPy gets from SymuVia.