Quick update:
The following two ways of setting up the normalisations both give the same result as the 'correct' way:

```python
normalizations = {EF31_normalisation[i]: [EF31[-(i + 1)]] for i in range(len(EF31))}
```

and

```python
normalizations = {}
normalizations[EF31_normalisation[0]] = EF31
for i in range(1, len(EF31)):
    normalizations[EF31_normalisation[i]] = []
```
I'm afraid I don't understand the matrices well enough to propose a fix...
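(For reference, the 'correct' way referred to above would presumably be the one-to-one mapping — a sketch, assuming `EF31` and `EF31_normalisation` are parallel lists of method tuples as in the gist:)

```python
# One-to-one: each normalisation applies only to its own impact category
normalizations = {EF31_normalisation[i]: [EF31[i]] for i in range(len(EF31))}
```

That a reversed mapping and a degenerate single-key mapping produce identical scores suggests the mapping was being ignored entirely.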
Normally normalization is the total amount of a substance emitted per year per person (or similar), so it's a bit strange for me to think of a normalization for each impact category. That being said, this does feel like there is a bug, and certainly an opportunity for better documentation. I am looking into it.
Sorry @pjamesjoyce, I had poor tests which only used one normalization and one weighting. I got too excited about defining a custom `__matmul__` to think about what it meant to do combinatorial multiplication. Should be fixed now.
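(For readers following along, a toy sketch of the distinction — an illustration only, not the actual bw2calc internals:)

```python
import numpy as np

# Characterized scores: one row per impact category, one column per functional unit
characterized = np.array([[1.0, 2.0, 3.0],
                          [4.0, 5.0, 6.0]])
norm_factors = {"norm_a": 0.5, "norm_b": 0.1}

# Combinatorial (the bug): every normalization hits every category -> 2 * 2 results
buggy = {
    (name, cat): factor * characterized[cat]
    for name, factor in norm_factors.items()
    for cat in range(characterized.shape[0])
}

# Intended: each normalization applies only to its configured categories -> 2 results
config = {"norm_a": [0], "norm_b": [1]}
intended = {
    (name, cat): norm_factors[name] * characterized[cat]
    for name, cats in config.items()
    for cat in cats
}
```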
Perfect! Thank you @cmutel - works like a charm :)
This might be a misunderstanding at my end, but when I use normalisation and weighting in `MultiLCA`, all impact categories get normalised by all normalisations (and weighted by all weightings), not just the ones specified in the `method_config`. This proliferates the number of calculations, seemingly unnecessarily, i.e. for 10 items and (for example) the 16 EF3.1 methods, once normalised and weighted you end up with `10 * 16 * 16 * 16 = 40960` scores, of which only 160 are relevant (i.e. the 10 * 16 normalised and weighted scores).

There's a full example in this gist: https://gist.github.com/pjamesjoyce/950911d4fde7f15fa6b851fe0b5cb8ad
But to summarise, I've created normalisations and weightings for the EF3.1 impact categories (using the JRC factors) and then added either `'normalisation'` or `'weighting'` as an extra tuple element for each impact category, i.e. the method ID for each normalisation or weighting is the corresponding impact category's tuple with that element appended (see the sketch below).
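(For illustration, hypothetical method IDs following that convention — the exact EF v3.1 tuples in the gist may differ:)

```python
# Hypothetical method tuples; the actual EF v3.1 identifiers may differ
acidification = ('EF v3.1', 'acidification', 'accumulated exceedance (AE)')
acidification_normalisation = acidification + ('normalisation',)
acidification_weighting = acidification + ('weighting',)
```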
When setting up the `MultiLCA`, the `'normalizations'` entry in the `method_config` is a dictionary with 16 keys, one for each of the EF3.1 normalisation categories, each mapping to a single-item list containing the corresponding EF3.1 impact category. Similarly, the weightings are set up so that each weighting refers to one normalisation category. See the sketch below.
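(A reconstruction of that setup — a sketch only: `EF31_weighting` is a hypothetical name for the parallel list of weighting tuples, and `demands` and `data_objs` are assumed to be prepared as in the gist:)

```python
import bw2calc as bc

# Sketch: each normalisation maps to exactly one impact category, and each
# weighting maps to exactly one normalisation
method_config = {
    "impact_categories": EF31,
    "normalizations": {
        EF31_normalisation[i]: [EF31[i]] for i in range(len(EF31))
    },
    "weightings": {
        EF31_weighting[i]: [EF31_normalisation[i]] for i in range(len(EF31))
    },
}

mlca = bc.MultiLCA(demands=demands, method_config=method_config, data_objs=data_objs)
mlca.lci()
mlca.lcia()
mlca.normalize()
```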
Given this setup, I'd expected `MultiLCA.normalize()` to only apply the acidification normalisation factors to the acidification impact category, and so on, but it appears that it applies all normalisations (and weightings) to all impact categories. So, for 10 random processes, rows 0 to 9 of the results give the normalised and weighted acidification scores for each of the items, but in rows 10 to 19 the climate change impact is being normalised by the acidification factors, hence the result is zero (and the `mlca.scores` dict consists of 40,960 items).

Am I setting it up incorrectly? Or is it more complex than it's worth to apply normalisation and weighting specifically, and is the best bet to filter out the irrelevant results in post-processing?
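(For what it's worth, a post-processing filter along these lines would recover the 160 relevant scores. A sketch only: it assumes each key in `mlca.scores` is a tuple whose first three elements are the weighting, normalisation, and impact-category identifiers — check the actual key layout in your bw2calc version — and `EF31_weighting` is again a hypothetical name:)

```python
# Hypothetical post-processing filter; assumes scores keys start with
# (weighting, normalisation, impact category). Verify before relying on it.
wanted = {
    (EF31_weighting[i], EF31_normalisation[i], EF31[i])
    for i in range(len(EF31))
}
relevant = {key: score for key, score in mlca.scores.items() if key[:3] in wanted}
assert len(relevant) == 10 * 16  # one score per item per impact category
```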