ncx-co / ifm_deferred_harvest

Documents, Data, and Code. The NCX Methodology For Improved Forest Management (IFM) Through Short-Term Harvest Deferral.
Apache License 2.0

Public Comment: 45 (Sarah Wescott) #45

Closed by ncx-gitbot 2 years ago

ncx-gitbot commented 2 years ago

Commenter Organization: Finite Carbon

Commenter: Sarah Wescott

2021 Deferred Harvest Methodology Section: 11 (Appendix A)

Comment: The additionality framework for this methodology hinges on a baseline approach that estimates the fraction of carbon at risk during the project activity period. This fraction considers both the probability that a given area would be harvested during the period and the portion of standing carbon that could be expected to be removed. Appendix A offers a framework for making these estimates, but it does not provide a specific model; it gives only a high-level theoretical description of how a model should be formulated. Two independent project developers using this methodology would likely end up with different assumptions, different implementation strategies, and potentially very different results. Projects developed under this methodology could therefore vary widely in quality: one developer may take a very conservative approach to the baseline model and assess a low probability of harvest in a given area, while another may assert a high probability of harvest with high carbon loss. In other words, without a standardized baseline objective or common-practice metric, consistent quality cannot be ensured across projects using this methodology.
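To make the variability concern concrete, here is a minimal illustrative sketch (not the NCX model, and all numbers are hypothetical) of the carbon-at-risk quantity described above: the product of standing carbon, harvest probability, and the fraction of carbon expected to be removed if harvest occurs. Two developers following only Appendix A's high-level description can plug in defensible but very different assumptions and reach baselines an order of magnitude apart.

```python
def carbon_at_risk(standing_carbon_t, p_harvest, removal_fraction):
    """Expected carbon (tonnes) at risk of loss from a stand over the
    project period: standing carbon x probability of harvest x fraction
    of carbon removed if harvest occurs. Illustrative only."""
    return standing_carbon_t * p_harvest * removal_fraction

# Two developers modeling the same hypothetical 1,000 t stand:
conservative = carbon_at_risk(1000, p_harvest=0.05, removal_fraction=0.4)  # ~20 t
aggressive = carbon_at_risk(1000, p_harvest=0.30, removal_fraction=0.9)    # ~270 t
print(conservative, aggressive)
```

The arithmetic is trivial; the point is that nothing in a high-level framework alone constrains the inputs, so the spread between these two baselines flows directly into the number of credits each project would claim.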

We expect the authors may point to the following statement in Appendix A as assurance of consistency in this approach: “It is expected that baseline models developed for use with this methodology will be subject to review by an expert panel.” However, stating that review is merely “expected” does not seem like strong enough language, and we do not believe there is sufficient detail to rely on this requirement for consistency:

- Will there be consistency in who is selected for the panel?
- Against what criteria will the panel assess baseline models developed for use with this methodology?
- Will the panel prioritize conservativeness in its review? Will it prioritize accuracy? Or will it simply check that a given model does not diverge from the high-level method described in Appendix A?
- Will the VVB also review a given model, or will it simply confirm that the expert panel signed off on it? If the former, will specific guidance be provided for model validation and verification?

More detail should be provided in the methodology regarding model review. It was also stated on the methodology webinar that the results of the expert panel review are unlikely to be made public, and we are concerned this approach lacks much-needed transparency. Not only may two projects under this methodology be far from equal; it will also be incredibly challenging for credit buyers and members of the public to assess which of the two is the higher-quality project. This is deeply problematic for an already opaque market.

Proposed Change: Disclosure of the baseline model creation process, the results of the expert panel review, and clear identification of the resulting claims of carbon at risk is imperative for judging the additionality of offsets created under this methodology. By disclosing model assumptions and results, experts and laypeople alike may assess the reasonableness of claims made under this methodology. We recommend Verra make this information available to allow the public to evaluate the integrity of credits produced by developers under this methodology.

ncx-gitbot commented 2 years ago

NCX response: We appreciate comments noting that the structure and performance of the baseline model used within this methodology strongly influence the predicted and realized climate impact of projects. Rather than relying on an expert review process, our revised methodology increases transparency: it requires detailed documentation of the particular models used, as well as shared benchmarking and performance information for baseline models. Finally, the revised approach to uncertainty explicitly accounts for imprecision in the baseline model when calculating the final number of credits generated by projects developed under this methodology.
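The idea of discounting credits for baseline-model imprecision can be sketched as follows. This is a hypothetical illustration, not NCX's actual formula: the function name, the use of a relative confidence-interval half-width, and the linear discount are all assumptions made for the example.

```python
def credits_after_deduction(estimated_impact_t, relative_half_width):
    """Discount an estimated climate impact (tonnes) by the baseline
    model's relative uncertainty, expressed as the half-width of its
    confidence interval divided by the estimate. Hypothetical sketch."""
    deduction = min(relative_half_width, 1.0)  # cap the deduction at 100%
    return estimated_impact_t * (1.0 - deduction)

# A more precise baseline model retains more of the estimated impact:
precise = credits_after_deduction(100.0, 0.10)    # ~90 credits
imprecise = credits_after_deduction(100.0, 0.40)  # ~60 credits
```

Under any scheme of this shape, a developer is rewarded for building and documenting a tighter baseline model, since wider uncertainty directly reduces the credits issued.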