LCA-ActivityBrowser / activity-browser

GUI for brightway2
GNU Lesser General Public License v3.0

bug in EF and contribution analyses screens when switching between relative and absolute views #362

Closed marc-vdm closed 4 years ago

marc-vdm commented 4 years ago

There's a problem with switching between relative and absolute displays in the EF and contribution analysis tabs. It takes my laptop about 15-30 minutes (at 100% single-core load) to switch between the two modes, and RAM use increases by about 2.5-4 GB in the process. AB completely freezes during the waiting period and the RAM remains in use after it is done.

This happens when using four processes (from EI3.6) and five assessment methods.

dgdekoning commented 4 years ago

Which version of the activity browser are you using, and are you getting any warnings or exceptions in the terminal/debug window?

The switch between relative and absolute results should definitely not be taking that long (or eating such an incredible amount of memory). The underlying code which creates the table and plot makes use of numpy ndarrays and pandas DataFrames and extracts N of the most influential exchanges from the already-calculated inventories for each functional unit (or method).
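The extraction step described above can be sketched roughly as follows. This is a minimal illustration (not the AB code itself, and the array values are made up) of pulling the N most influential exchanges out of an already-calculated contribution array by absolute value:

```python
import numpy as np
import pandas as pd

# Hypothetical contribution scores for five exchanges (illustrative values only)
contributions = np.array([0.02, -1.5, 0.3, 4.1, -0.07])
N = 3

# Indices of the N entries with the largest absolute contribution, biggest first
top_idx = np.argsort(np.abs(contributions))[::-1][:N]
top = pd.DataFrame({"index": top_idx, "value": contributions[top_idx]})
```

With arrays this small the step is effectively free; the thread below shows what happens when the selection degenerates and thousands of rows get through.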

marc-vdm commented 4 years ago

This happens on both 2.4 and 2.5 (pulled just now) for me.

However, I just checked, and it only happens in one of my projects, not in the others. Any ideas on what could cause this?

Terminal output is nominal:

```
-hiding tab: History

DevTools listening on ws://127.0.0.1:3999/devtools/browser/21556a28-7d1e-4c03-8e9d-5f813b770793
-hiding tab: Activity Details
-hiding tab: Characterization Factors
-hiding tab: Graph Explorer
-hiding tab: LCA results
Reset metadata.
Reset project settings directory to: PATH
Loading user settings:
Reset metadata.
Reset project settings directory to: PATH
Loaded project: thesis
Brightway2 data directory: PATH
Brightway2 active project: thesis
Qt Version: 5.12.5
Remote debugging server started successfully. Try pointing a Chromium-based browser to http://127.0.0.1:3999
Current shape and databases in the MetaDataStore: (0, 0) set()
Adding: ei36
Adding: biosphere3
Drawing graph, i.e. loading the view.
+showing tab: LCA results
```
dgdekoning commented 4 years ago

Currently, absolutely no idea. I attempted several ways of replicating the problem, but apart from mildly stressing the CPU with the repeated switching, I can't replicate the bug.

Will think on it tonight.

bsteubing commented 4 years ago

might also be something in the dependencies (e.g. numpy, matplotlib...)?

marc-vdm commented 4 years ago

@bsteubing, considering that other projects work fine, I'm not sure that's the case.

I'll also try to investigate, if I find anything, I'll share it here. I might sprinkle some print statements throughout the code to see what happens.

dgdekoning commented 4 years ago

With assistance from @e4BdSBmUzHowFico5Ktn I have replicated the issue and it seems to occur only in very special cases:

calculation setup contains:

The actual issue occurs for both EF and process contributions when comparing the functional units, AND ONLY when looking at the absolute values. For relative values, changing the cutoff, impact category, aggregation, etc. does not have a major impact.

Maybe this has to do with the conversion of a large amount of 'nan' or empty values from numpy into pandas?

dgdekoning commented 4 years ago

Further consideration suggests that the exact issue is caused by a series of 0's or nan's being found when attempting to select the 'top X' values, which causes the entire table of EF or process contributions to be returned. This in turn massively slows down all the following methods, as the AB is now labeling & plotting thousands of values vs. the usual handful.
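To make the failure mode above concrete, here is a hypothetical sketch (not the actual AB code; function and variable names are invented) of how a relative-cutoff 'top X' selection degenerates when a contribution column is all zeros: the threshold becomes 0.0, every row clears it, and the whole table comes back:

```python
import numpy as np

def top_contributions(values, cutoff=0.05):
    # Keep every row whose absolute contribution clears a fraction of the max.
    threshold = cutoff * np.abs(values).max()
    return np.flatnonzero(np.abs(values) >= threshold)

normal = np.array([4.0, 0.001, 0.002, 2.0, 0.0])
degenerate = np.zeros(15000)  # e.g. a method with no matching flows

n_normal = len(top_contributions(normal))       # only the influential rows
n_degenerate = len(top_contributions(degenerate))  # threshold 0.0: all rows
```

With an all-zero column, `abs(0) >= 0.0` holds for every entry, so all 15000 rows survive the cut and get passed on to the labeling and plotting steps.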

marc-vdm commented 4 years ago

I've found the solution!

However, the fix is in `contribution.py` of bw2analyzer (in BW2, not AB; at least, this is what PyCharm tells me). I tried to see how it would work without the update plot method in the contribution tab class in `LCA_results_tabs.py`; this hugely reduced the issue in terms of time taken, but it still took about ~20s for me to switch to the absolute view in contribution analysis.

To get back to the fix though:

Adding `results = np.where(results == 0, np.nan, results)` between `results = np.hstack((data.reshape((-1, 1)), np.arange(data.shape[0]).reshape((-1, 1))))` (line 51-52) and `return results[np.argsort(np.abs(data))[::-1]][:limit, :]` (line 52) in the `ContributionAnalysis` class in `contribution.py` replaces the 0 values with `np.nan` before the array is sorted.
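Assembled from the two quoted lines, the proposed change looks roughly like this (a sketch, not the full bw2analyzer method; the comment below previews the retraction later in this thread). Note that `np.where` masks zeros in *both* columns of the stacked array, including the index column, which may be why this attempt broke downstream:

```python
import numpy as np

def sort_array(data, limit=25):
    # The two quoted lines from bw2analyzer's ContributionAnalysis, with the
    # proposed insertion between them (later retracted: it caused a KeyError).
    results = np.hstack((data.reshape((-1, 1)),
                         np.arange(data.shape[0]).reshape((-1, 1))))
    results = np.where(results == 0, np.nan, results)  # proposed zero-masking
    return results[np.argsort(np.abs(data))[::-1]][:limit, :]

arr = np.array([5.0, 0.0, -3.0, 0.0, 1.0])
out = sort_array(arr, limit=3)
# out[0] is [5.0, nan]: the row *index* 0 was also masked to nan,
# which is the kind of corruption that can surface as a KeyError later.
```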

Does anybody know where I can make pull requests to BW2? I can't immediately find the repo.

edit: nope wait, I got a bit too excited there; this gives a KeyError. I'll get back to this soon.

dgdekoning commented 4 years ago

After the edit: Awww man, I got so excited when I saw the email.

Thanks for continuing to work on it!

If it does turn out to be something inside brightway2 itself, you can find all of the repositories here: https://bitbucket.org/cmutel.

marc-vdm commented 4 years ago

Alright, actually fixed it now. tl;dr: I'm replacing `0` with `np.nan` in the output and dropping rows with only `np.nan` as values.

I've run lots of print statements to see how much time each function takes, and there are two slow sections. The first is the plotting (which is obviously not fun for matplotlib with ~2k-15k variables). The other I've narrowed down partially: in `multilca.py` > class `Contributions` > `def get_labels` > `for k in keys:` > `elif k in AB_metadata.index:`, the `[str(l) for l in list(AB_metadata.get_metadata(k, fields))]` part in `translated_keys.append(separator.join())` seems to scale ~linearly with input size.

The function is happy with just a few (I guess n<500) values to get the keys for, but gets slow quickly. This means that the issue is with the input size (the 0 values).
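The linear scaling described above can be sketched like this. Everything here is a hypothetical stand-in (the real `AB_metadata.get_metadata` does a metadata lookup per key); the point is simply that one lookup plus one string join happens per surviving row, so 15,000 rows cost ~300x more than 50:

```python
separator = " | "

def get_metadata(key, fields):
    # Hypothetical stand-in for AB_metadata.get_metadata(k, fields)
    return [f"{f}:{key}" for f in fields]

def build_labels(keys, fields=("name", "location")):
    # One metadata lookup and one join per key: O(len(keys)) overall
    translated_keys = []
    for k in keys:
        translated_keys.append(
            separator.join(str(l) for l in get_metadata(k, fields))
        )
    return translated_keys

labels = build_labels(range(3))
```

So the right fix isn't to speed up this loop, but to shrink its input, which is what the zero-dropping below does.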

So that meant the 0 values really needed to be cut out.

We (the AB) turn the contribution results into a pandas DataFrame in `multilca.py` > class `Contributions` > `def get_labelled_contribution_dict`. I've added `df = df.replace(0, np.nan).dropna(how='all')` before checking for a mask, and this resolves the issue (without other errors this time).
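The effect of that one-liner on a small made-up contribution table (column names are illustrative, not from the AB):

```python
import numpy as np
import pandas as pd

# Two functional units; the second row contributes nothing to either
df = pd.DataFrame({
    "FU1": [0.8, 0.0, 0.1, 0.0],
    "FU2": [0.5, 0.0, 0.0, 0.2],
})

# The fix: zeros become NaN, then rows that are entirely NaN are dropped,
# so all-zero rows never reach the labeling and plotting steps
cleaned = df.replace(0, np.nan).dropna(how="all")
```

Rows with at least one nonzero contribution survive (with their zeros shown as NaN), while fully zero rows disappear, shrinking the input to the slow labeling loop.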

I'll put in a PR shortly.

dgdekoning commented 4 years ago

Yeah, I was going to mention that the issue was probably coming from the labeling and dictionary-creating methods in `Contributions`, but it looked like you were already there when you mentioned the `sort_array` method from brightway. Will look at the pull request as soon as possible!