Before we embark on improving model predictions using information shared across years, I think it is worth documenting the scores and performance of the year-independent solutions.
[ ] Download all new AOP data from 2020
[ ] Run predictions + CHM filter on 2020 data
[ ] Match trees among years using an IoU threshold (0.3? 0.4?); see the matching sketch after this list
[ ] Tree Fall metrics
[ ] Growth Metrics
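As a starting point for the matching item above, here is a minimal sketch of IoU matching between two years of crown predictions. It assumes each year's predictions are a GeoDataFrame of crown polygons or boxes; the variable names, the 0.4 threshold, and the `predicate` argument (geopandas >= 0.10) are assumptions, not settled choices, and the matching is greedy rather than strictly one-to-one.

```python
# Sketch: greedy IoU matching of crown predictions between two years.
# Assumes preds_2019 and preds_2020 are GeoDataFrames in the same CRS.
import geopandas as gpd

def match_years(preds_2019, preds_2020, iou_threshold=0.4):
    """Pair each 2019 crown with the 2020 crown of highest IoU above the threshold."""
    matches = []
    # Spatial join to restrict comparisons to crowns that actually overlap
    candidates = gpd.sjoin(preds_2019, preds_2020, how="inner", predicate="intersects")
    for idx_2019, group in candidates.groupby(level=0):
        geom_2019 = preds_2019.geometry.loc[idx_2019]
        best_iou, best_idx = 0.0, None
        for idx_2020 in group["index_right"]:
            geom_2020 = preds_2020.geometry.loc[idx_2020]
            union = geom_2019.union(geom_2020).area
            iou = geom_2019.intersection(geom_2020).area / union if union > 0 else 0.0
            if iou > best_iou:
                best_iou, best_idx = iou, idx_2020
        if best_iou >= iou_threshold:
            matches.append((idx_2019, best_idx, best_iou))
    return matches
```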
How do we measure the success of cross-year matching?
For the purposes of between-year analysis, tree fall rate can be treated as a nuisance factor: given the enormous number of trees, the difference in tree counts due to tree fall should be very small.
Therefore the main metric is the difference in count per tile. We want to minimize this difference.
A secondary metric is the average change in canopy size. We want this difference to be as small as is biologically plausible. The same applies to change in canopy height: the lower the values, the more likely they represent ecological signal.
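A rough sketch of those two metrics follows, building on the `matches` list from the earlier sketch. The "tile", "crown_area", and "height" column names are illustrative assumptions about how predictions would be tabulated.

```python
# Sketch: per-tile count difference (primary) and mean canopy change (secondary).
import pandas as pd

def per_tile_count_difference(preds_2019, preds_2020):
    """Absolute difference in predicted tree count per tile."""
    counts_2019 = preds_2019.groupby("tile").size()
    counts_2020 = preds_2020.groupby("tile").size()
    return counts_2019.subtract(counts_2020, fill_value=0).abs()

def mean_canopy_change(matches, preds_2019, preds_2020):
    """Mean change in crown area and height across matched trees."""
    rows = []
    for idx_2019, idx_2020, _ in matches:
        rows.append({
            "area_change": preds_2020.loc[idx_2020, "crown_area"]
            - preds_2019.loc[idx_2019, "crown_area"],
            "height_change": preds_2020.loc[idx_2020, "height"]
            - preds_2019.loc[idx_2019, "height"],
        })
    return pd.DataFrame(rows).mean()
```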
The other metric to use is the opposite of tree falls: the number of trees missing a prediction in one year but present in another year with a CHM height of similar size.
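A hedged sketch of that "missed prediction" metric is below: 2019 crowns with no 2020 match whose 2020 CHM height is still close to the 2019 predicted height, suggesting the tree is standing and the model simply missed it. The `chm_height_2020` lookup and the 3 m tolerance are assumptions.

```python
# Sketch: count crowns predicted in 2019 that have no 2020 match but whose
# location still shows a similar CHM height in 2020.
def missed_predictions(matches, preds_2019, chm_height_2020, tolerance=3.0):
    matched_2019 = {m[0] for m in matches}
    missed = []
    for idx in preds_2019.index:
        if idx in matched_2019:
            continue
        # chm_height_2020 is assumed to map a 2019 crown index to the 2020 CHM
        # height sampled at that crown's location
        if abs(chm_height_2020[idx] - preds_2019.loc[idx, "height"]) <= tolerance:
            missed.append(idx)
    return missed
```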