ahuang11 opened this issue 4 years ago
You can see how @matteodefelice did it in his notebook (https://colab.research.google.com/drive/1wWHz_SMCHNuos5fxWRUJTcB6wqkTJQCR)
brier_score = xs.brier_score(obs > obs.quantile(2/3, dim='time'),
                             (fct > fct.quantile(2/3, dim='time')).mean('member'))
baseline = xs.brier_score(obs > obs.quantile(2/3, dim='time'), 2/3)
bss = 1 - (brier_score / baseline)
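As a side note, a runnable version of that snippet on synthetic data might look like this (a sketch: the random obs and fct and the xr.full_like wrapping are my additions; 2/3 as the constant baseline probability follows the notebook, although the climatological frequency of exceeding the upper tercile is 1/3):

import numpy as np
import xarray as xr
import xskillscore as xs

# synthetic verification series and a 10-member ensemble forecast
obs = xr.DataArray(np.random.rand(30), dims='time')
fct = xr.DataArray(np.random.rand(30, 10), dims=('time', 'member'))

# event: exceeding the upper tercile; forecast: ensemble exceedance probability
event = obs > obs.quantile(2/3, dim='time')
prob = (fct > fct.quantile(2/3, dim='time')).mean('member')

brier_score = xs.brier_score(event, prob)
# constant baseline probability wrapped as an xarray object
# (newer xskillscore versions reject plain floats, see below)
baseline = xs.brier_score(event, xr.full_like(event, 2/3, dtype=float))
bss = 1 - (brier_score / baseline)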
Yeah, I think both ways are valid, just a different baseline; maybe his can be used as the default if ref=None
def brier_skill_score(obs, fct, threshold, ref=None):
    # Brier score of the ensemble's exceedance probability against the observed event
    bscore_fct = xs.brier_score(obs > threshold, (fct > threshold).mean('member'))
    if ref is None:
        # default baseline: constant climatological tercile forecast
        bscore_ref = xs.brier_score(obs > obs.quantile(2/3, dim='time'), 2/3)
    else:
        bscore_ref = xs.brier_score(obs > threshold, (ref > threshold).mean('member'))
    bskill = 1 - (bscore_fct / bscore_ref)
    return bskill
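For illustration, a hypothetical call with the synthetic obs and fct from the sketch above (ref here is just another random ensemble standing in for, e.g., persistence; note the ref=None branch still passes a plain float, which newer xskillscore versions reject, as reported further down):

ref = xr.DataArray(np.random.rand(30, 10), dims=('time', 'member'))
threshold = obs.quantile(2/3, dim='time')

bss_vs_ref = brier_skill_score(obs, fct, threshold, ref=ref)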
I think we should somehow come up with a pattern/wrapper that translates every score into a skill score. The formula would be from https://www.cawcr.gov.au/projects/verification/. Anyone have ideas on how to do this in a few lines of code?
Is it applicable across all scores, though? I am only familiar with the Brier skill score (this is the first time I've heard of an RMSE skill score).
I think so. In climpred we have a metrics class with attributes min, max, and perfect. Maybe it would be nice to move this to xskillscore.
I think this sort of thing should be at climpred. a and b in xskillscore are two time series, for instance. Let's say a is the forecast and b the verification product. To get BSS, you'd evaluate a against b, and then a persistence derivation of b against b, for instance. Since we manage reference forecasts at climpred, I'd think over there we would just call xskillscore's BS twice.
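A sketch of that two-call pattern, assuming a is an ensemble forecast and b the verification; the persistence reference is illustrated here as b shifted by one time step (just one possible persistence derivation):

import xskillscore as xs

event = b > b.quantile(2/3, dim='time')

# score the forecast ensemble against the verification
bs_fct = xs.brier_score(event, (a > a.quantile(2/3, dim='time')).mean('member'))

# score a persistence reference: the previous step's event as a 0/1 forecast,
# filling the first step with the climatological base rate of 1/3
persistence = event.astype(float).shift(time=1).fillna(1/3)
bs_ref = xs.brier_score(event, persistence)

bss = 1 - bs_fct / bs_ref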
I guess with the addition of the sign test, this could still fit here. If you hand xskillscore two different forecasts (one being the dynamical and the other the reference) you can compute skill scores. climpred would then wrap that to do nice looping through all the leads.
This could be a template: https://github.com/bradyrx/climpred/blob/8a3fc954df2043f998987a1964059a9dc0d2e11c/climpred/metrics.py#L128-L140, where we would need the perfect attribute.
But maybe we shouldn't overdo things. xs is perfect for doing easy functions: xs.metric(a, b, dim). Skill scores can actually be calculated from this manually quite easily; I don't see a large computational benefit of implementing this compared to doing it sequentially, e.g. manual_scoring_function(xs.metric(a, b, dim), reference(*args, **kwargs)). At the very least we shouldn't change the existing API when implementing this.
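For what it's worth, that manual route might look like this (a sketch; mse stands in for any xskillscore metric, and fct_det/ref_det are assumed deterministic forecast and reference fields):

import xskillscore as xs

# score forecast and reference with the same metric, then combine by hand
mse_fct = xs.mse(obs, fct_det, dim='time')
mse_ref = xs.mse(obs, ref_det, dim='time')
msess = 1 - mse_fct / mse_ref  # 1 is perfect, 0 is no better than the reference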
See https://github.com/xarray-contrib/xskillscore/issues/284. Leaving open as good discussion here.
By the way, it seems that the line:
baseline = xs.brier_score(obs_final > obs_final.quantile(2/3, dim = 'time'), 2/3)
doesn't work any more; I get this error on my Colab:
AttributeError Traceback (most recent call last)
<ipython-input-68-ad9dcdfd6ce1> in <module>()
----> 1 baseline = xs.brier_score(obs_final > obs_final.quantile(2/3, dim = 'time'), 2/3)
/usr/local/lib/python3.8/site-packages/xskillscore/core/probabilistic.py in brier_score(observations, forecasts, member_dim, fair, dim, weights, keep_attrs)
348 res = (e / M - o) ** 2 - e * (M - e) / (M ** 2 * (M - 1))
349 else:
--> 350 if member_dim in forecasts.dims:
351 forecasts = forecasts.mean(member_dim)
352 res = xr.apply_ufunc(
AttributeError: 'float' object has no attribute 'dims'
EDIT: yes, now it requires an xarray object and it doesn't work any more with numbers.
Thanks for reporting, @matteodefelice. Can you please open an issue for this? Conversion from a number to an xr.DataArray can easily be added then.
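In the meantime, a workaround is to wrap the scalar yourself, e.g. (a sketch using xr.full_like; obs_final as in the report above):

import xarray as xr

event = obs_final > obs_final.quantile(2/3, dim='time')
# broadcast the constant probability to an xarray object of matching shape
baseline = xs.brier_score(event, xr.full_like(event, 2/3, dtype=float))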
Does anyone have an example of how to calculate the skill score for the RPS or CRPS? I still haven't found a way to do it...
We have crpss in climpred.
For crpss, look into the proper scoring example.
@matteodefelice I believe rpss is (kind of) in climpred as well: https://github.com/pangeo-data/climpred/blob/main/climpred/tests/test_probabilistic.py#L252. crpss is here: https://github.com/pangeo-data/climpred/blob/main/climpred/metrics.py#L2176. Porting it here would be appreciated.
Thanks @aaronspring but the real difficulty for me is calculating the RPS of the climatology, but I will find a way. Thanks to everyone for sharing.
It is generalized for perfect models and hindcasts. I create a fake 1-member forecast from groupby('dayofyear').
That's a good idea. I did it, and the results seem reasonable. Given that I am working with seasonal averages (one point per year), this is what I have done:
cat_edges = obs.quantile(q=[1/3, 2/3], dim='year').rename({'quantile': 'category_edge'})
rps_clim = xskillscore.rps(obs, obs.mean(dim='year').expand_dims({'member': 1}), cat_edges, dim='year', member_dim='member')
The RPSS is comparable with the BSS, so I think this should work.
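Completing that example, the skill score then follows the same pattern (a sketch; fct is assumed to be the seasonal ensemble forecast with a member dimension):

rps_fct = xskillscore.rps(obs, fct, cat_edges, dim='year', member_dim='member')
rpss = 1 - rps_fct / rps_clim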
Just came across this package. Nice work. What is the relationship between climpred and xskillscore? Some metrics are only available in climpred and are missing in xskillscore. A function to calculate the skill score would actually be really nice (even if it was just the basic function that @aaronspring mentioned here).
Relationship: climpred imports (nearly?) all xskillscore metrics by wrapping xskillscore. climpred also has some metrics defined only in climpred, such as skill scores for some metrics, e.g. normalized MSE as nmse = mse/std, or msess = 1 - nmse.
Any objections to including these missing metrics in xskillscore? That seems to me the better place, rather than having two packages with mixed functionality. All benchmarking-related scoring functions should be available in xskillscore IMO (and not climpred). Re skill score (the topic of this issue): I was a bit surprised that the package (which is called xskillscore) does not include a function to calculate skill scores. It's a standard benchmarking metric that seems very suitable for this package?!
No objections to adding metrics here that are missing and already in climpred. Just make sure they give the same result.
I think we wanted to keep the API consistent for all metrics: metric(forecast, observation). For a skill score you'd need to pass a reference forecast somehow. I'm for that and suggest having this API discussion first.
https://confluence.ecmwf.int/display/FUG/12.B+Statistical+Concepts+-+Probabilistic+Data I think it's just:
skill_score = 1 - score(fct) / score(ref)
where ref can be persistence or climatology (where each year of the climatology is a member).
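For a CRPS example of that formula, a sketch with a climatological reference built as suggested (each observed year becomes one member; obs and fct are illustrative names for the verification and the ensemble forecast):

import xskillscore as xs

# reference ensemble: every observed year treated as one member
clim_ref = obs.rename({'year': 'member'})

crps_fct = xs.crps_ensemble(obs, fct)       # forecast score
crps_ref = xs.crps_ensemble(obs, clim_ref)  # climatology score
crpss = 1 - crps_fct / crps_ref             # skill score vs. climatology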