NINAnor / ecosystemCondition

This repository is for documenting the design and calculation of indicators for ecosystem condition in Norway
https://ninanor.github.io/ecosystemCondition/
Creative Commons Attribution 4.0 International

Validate FPI indicators #135

Open · anders-kolstad opened this issue 11 months ago

anders-kolstad commented 11 months ago

We can validate the functional plant indicators better. For example, there is a known bias in the underlying data sets (the GADs) #134. This could cause arbitrary spatial patterns in the indicator values. There is also a question about where to put the threshold value. Currently it is defined from a mathematical property (using quantiles). Are these threshold locations actually in poor condition? We could go into the field and check. Or, perhaps better, we could calculate the indicator for nature type localities of known condition and see if the indicator actually correlates with the known condition. And is this correlation the same across latitudinal and altitudinal gradients?
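A minimal sketch of that validation step, using entirely hypothetical data and column names (the real workflow would use indicator values computed for nature type localities that have an independent condition assessment):

```r
## Hypothetical example: one row per nature type locality with the computed
## FPI value and an independently assessed condition class.
fpi <- data.frame(
  indicator_value = c(0.92, 0.85, 0.60, 0.55, 0.40),
  condition_class = c("good", "good", "reduced", "reduced", "poor")
)

## Treat condition as ordered and test for a monotonic association.
fpi$condition_rank <- as.numeric(factor(fpi$condition_class,
                                        levels = c("poor", "reduced", "good")))
cor.test(fpi$indicator_value, fpi$condition_rank, method = "spearman")

## Repeating the test within latitude/elevation bins would show whether the
## association holds across those gradients.
```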

joatop commented 11 months ago

Well, when we worked out the concept for the fpi indicators in 2018, we did just this: we took case study data of known condition (good or reduced, that is; the case studies were from forest, mountain, wetland and semi-natural ecosystems) to see how the case study indicator values compare to the reference distribution from NiN, and whether properties of the distribution can be used to define scaling values. Based on that, we landed on generally taking the 95% CI of the reference distributions as scaling values. The issues with the NiN GAD lists have also been mentioned in every single report since then: not regional enough, not updated. Of course one could do this kind of validation again and with more case studies for more ecosystem types...we just need to find the data. GRUK and ASO say something about condition, ANO does not. NiN e. Mdir instruks says something about condition, but does not have any species data.
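For reference, a minimal sketch of how scaling values can be taken from the 95% interval of a reference distribution and compared against a case study value (the simulated reference and the numbers are placeholders, not the actual NiN distributions or the exact formula from the reports):

```r
## Placeholder reference distribution standing in for indicator values
## derived from the NiN generalised species lists.
set.seed(42)
reference <- rnorm(1000, mean = 5, sd = 1)

## Lower and upper scaling values = the 2.5% and 97.5% quantiles.
scaling_values <- quantile(reference, probs = c(0.025, 0.975))

## A case study value outside this interval is flagged as a deviation from
## the reference, i.e. a candidate signal of reduced condition.
case_value <- 2.9
case_value < scaling_values[[1]] | case_value > scaling_values[[2]]
```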

anders-kolstad commented 11 months ago

Excellent! I looked up NINA rapport 1529b and I must admit I had not read it carefully before. Sorry for bringing up old topics disguised as new! The analyses in rapport 1529b are exactly what I was thinking about. Here are some reflections from reading it now. Remember, this is me naively jumping into a long history of discussions and previous work, so I am happy to be persuaded to think anew about these things :)

Semi-natural meadows

Q1: For semi-natural ecosystems, should we not exclude the other indicators besides Grime's CSR-R?

Semi-natural salt marshes

Q2: Does the last point tell us it should be a one-sided indicator? Q3: Does the first point tell us that the indicator is not very useful, because it tells us a site is in good condition even when we know it is not?

Coastal heathlands

Q4: There are currently 7 indicators developed for semi-natural ecosystems based on plant traits (Tyler et al.) and Grime's strategies, yet only two of these are found to respond to ecological condition. How can we justify keeping all the others?

Q5: In the ANO dataset the nature type is known, so could we perhaps calculate different indicators depending on the nature type (e.g. CSR-R for meadows and coastal heathlands, and possibly boreal heathlands as well, and Ellenberg-L for salt marshes)? These could be interpreted individually, but also combined into one regional indicator for shifts in mean plant trait values.
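A minimal sketch of what that nature-type-specific selection could look like (the data frame, the values and the mapping itself are hypothetical):

```r
## Hypothetical ANO-like data: scaled trait indicators per nature type.
ano <- data.frame(
  nature_type = c("semi-natural meadow", "coastal heathland", "salt marsh"),
  csr_r       = c(0.71, 0.65, 0.90),
  ellenberg_l = c(0.95, 0.88, 0.58)
)

## Nature-type-specific choice of indicator, as suggested in Q5.
ano$selected <- ifelse(ano$nature_type == "salt marsh",
                       ano$ellenberg_l, ano$csr_r)

## Interpreted individually per nature type...
aggregate(selected ~ nature_type, data = ano, FUN = mean)

## ...and combined into one regional value for shifts in mean trait values.
mean(ano$selected)
```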

Ombrotrophic mires

Q6: This last point says to me that Ellenberg F is not suitable for wetlands, at least given the validation seen here. Why is this indicator still calculated for wetlands? I realize there is a difference between just calculating it and actually using it, but I still don't think it should be pursued any further before better validation shows that it deserves to be.

Mountain birch forest

Q7: This is a strong response, but also from a quite specific case. The last point is interesting too. Could it imply that we should change the scaling function so that, for example, a condition value >0.6 is set to 1? This is conservative and paints a nicer picture of forest condition than reality, but at the same time we have little justification for saying that such sites differ from the expectation under the reference state. A smoother function with roughly the same effect would be a sigmoid scaling.
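A minimal sketch of the two alternatives mentioned, with purely illustrative parameter values for the sigmoid:

```r
x <- seq(0, 1, by = 0.01)   # linearly scaled condition values

## Hard rule: everything above 0.6 is set to 1.
truncated <- ifelse(x > 0.6, 1, x)

## Sigmoid alternative: plogis() is the logistic CDF; the midpoint (0.4)
## and steepness (0.08) are assumptions, not values from the reports.
sigmoid <- plogis(x, location = 0.4, scale = 0.08)

plot(x, truncated, type = "l",
     xlab = "raw scaled value", ylab = "adjusted condition value")
lines(x, sigmoid, lty = 2)
legend("bottomright", legend = c("truncate above 0.6", "sigmoid"), lty = 1:2)
```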

So, in conclusion, I think we are creating more trait-based indicators than these validation tests imply we should, and that indicators should possibly be nature type specific, and possibly later aggregated to become ecosystem specific.

joatop commented 11 months ago

In the 1529b report, we used cases where we had very concrete hypotheses about which indicators' values we expected to have shifted, because we had an expert assessment identifying the ecological pressure that had pulled the ecosystem towards reduced condition. As such, the cases in the report function as a proof of concept THAT indicator value distributions can be used to identify deviations from good ecological condition, but not for identifying WHICH indicators should be used for any given ecosystem. The indicators used in the forest and mountain assessments were defined by the ecosystem experts, and the ones explored in the development work for wetlands, seminat and natopen here were defined by the respective ecosystem expert groups. I think we should run the analyses for all vegetation indicators that the experts come up with a hypothesis for, but I do share the concern about using them all in the aggregation in the end. After all, they are all expressions of only one aspect of the ecosystem, the plant community.

So, I wonder if a better way to handle this in future condition assessments would be to (1) still analyse multiple vegetation indicators in line with the hypotheses from the experts, but then (2) only aggregate one of them, naturally the one with the strongest deviation from good ecological condition (the verste-styrer principle, i.e. the worst indicator governs), together with the other indicators to generate characteristic, pressure and condition indices, while (3) still showing the results of all of the vegetation indicators.

In that way, the assessment would become more robust against 'indicator inflation' and the vegetation would not be given multiple weight, while at the same time retaining flexibility towards the complexity of vegetation responses and their spatial variation (it may well be that different areas show deviations in different vegetation indicators, depending on the pressures acting locally).
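A minimal sketch of steps (1)–(3) with hypothetical sites and values: every vegetation indicator is analysed and reported, but only the worst one per site would be passed on to the aggregation:

```r
## Hypothetical scaled values for several vegetation indicators per site.
veg <- data.frame(
  site      = c("A", "A", "A", "B", "B", "B"),
  indicator = rep(c("CSR-R", "Ellenberg-L", "Ellenberg-F"), 2),
  scaled    = c(0.81, 0.55, 0.90, 0.95, 0.88, 0.62)
)

## (1) and (3): every vegetation indicator is analysed and shown.
veg

## (2): only the worst-performing vegetation indicator per site enters the
## condition index (the verste-styrer / worst-governs principle).
worst <- aggregate(scaled ~ site, data = veg, FUN = min)
names(worst)[2] <- "vegetation_value_for_aggregation"
worst
```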

anders-kolstad commented 10 months ago

Your approach would reduce indicator inflation, which is good. I worry, though, that point 2 would create artificially low condition scores, and that indicator sets should rather be decided a priori, before seeing the data.

My feeling is still that indicator development should be data driven and not demand driven, and that quality assurance should be in place to halt the propagation of indicators until they are deemed ready by the developers and reviewed by peers. It is too easy for assessment panels to just say that they want to see all the indicators available, and I fear they will not be able to judge the precision of each of them. For example, some vegetation indicators seem to have biases that give indicator values <<1 even when condition is good, but we do not have the data to show this for most indicators, making it hard to know which ones to trust. Assessment panels should not be presented with these data in the first place, I think.