Feature request
Users likely switch back and forth between different functional units and impact categories when looking at the Sankey. As each calculation takes time, we should cache the results (keyed by the combination of functional unit (FU), impact category (IC), scenario, cutoff, and iteration count) and check whether results are already in the cache before calculating them.
Each result is only a small JSON object, so the cache shouldn't take much space, while caching can make AB feel significantly faster in use.
A similar caching implementation was added in #1046: https://github.com/LCA-ActivityBrowser/activity-browser/blob/6359dd76a6128aeb31698628adf3622ce6f4e371/activity_browser/layouts/tabs/LCA_results_tabs.py#L1042
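As a rough illustration of the idea, here is a minimal sketch of a parameter-keyed result cache. All names here (`SankeyCache`, the `calculate` callback, the key fields) are hypothetical and not the actual Activity Browser API; the point is only that the cache key covers every parameter that affects the result.

```python
# Hypothetical sketch of a Sankey result cache; class and function names
# are illustrative only, not the real Activity Browser implementation.
import json


class SankeyCache:
    """Cache Sankey results keyed by the full set of calculation parameters."""

    def __init__(self):
        self._cache = {}

    def _key(self, fu, ic, scenario, cutoff, iterations):
        # FU/IC descriptors may not be hashable as-is, so serialise them
        # to stable strings before building the dictionary key.
        return (
            json.dumps(fu, sort_keys=True),
            str(ic),
            scenario,
            cutoff,
            iterations,
        )

    def get_or_calculate(self, fu, ic, scenario, cutoff, iterations, calculate):
        key = self._key(fu, ic, scenario, cutoff, iterations)
        if key not in self._cache:
            # Only run the expensive graph traversal on a cache miss.
            self._cache[key] = calculate(fu, ic, scenario, cutoff, iterations)
        return self._cache[key]
```

Switching back to a previously viewed FU/IC combination then returns the stored JSON immediately instead of re-running the traversal.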
In the future we could look at the graph traversal calculations themselves and see whether previously cached results can be fed in to reduce the work, but this would be much more complex. For example, when a calculation is run with the cutoff lowered from 5% to 3%, we could take the 5% results and only continue the traversal from there down to the lower cutoff.