Open thelightonmyway opened 3 months ago
Hi @thelightonmyway, thanks for the question!
When you say the model becomes more robust with more modes, do you mean the reconstruction error decreases? If so, I agree - adding more modes improves the model's performance in reconstructing the original signal, achieving zero error when all modes are used.
However, the Bootstrapper helps identify up to which mode the result is significant, i.e. the mode truly represents a pattern that is not likely due to noise in your data. If you want to interpret the obtained patterns (as opposed to optimally reconstructing your original data), you should only consider the first few significant modes. If a mode is insignificant, all higher modes are also insignificant. So your result of [True, False, True, True, False] suggests only the first mode is significant.
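To make the "stop at the first insignificant mode" rule concrete, here is a minimal sketch (plain Python, not part of xeofs; the helper name is hypothetical) that counts the leading significant modes in a boolean result like the one above:

```python
def n_leading_significant(flags):
    """Count significant modes, stopping at the first False:
    an insignificant mode invalidates all higher modes."""
    count = 0
    for flag in flags:
        if not flag:
            break
        count += 1
    return count

# The result from the thread: only the first mode counts.
print(n_leading_significant([True, False, True, True, False]))  # -> 1
```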
Does that clear things up?
I understand. Thanks for your reply.
I recently used EOFBootstrapper from xeofs.validation to run a significance test, and I got a result. However, my result seems incorrect: the first mode and the third mode passed the significance test, but the second mode did not. As far as I know, an EOF analysis usually becomes more robust as the number of modes increases, so I'm not clear on this result. Here is my code:
import xarray as xr
from xeofs.models import EOF
from xeofs.validation import EOFBootstrapper

# Load the wind data, dropping the last six time steps
fg = xr.open_dataset("/mnt/e/wind_global/obs/masked/E-OBS_wind_monthly_mean_1×1_masked.nc").fg[:-6, :, :]

# Fit a 5-mode EOF model with cosine-latitude weighting
n_modes = 5
model = EOF(n_modes=n_modes, use_coslat=True)
model.fit(fg, dim="time")
components = model.components()
scores = model.scores(normalized=False)
expvar = model.explained_variance_ratio()

# Bootstrap the explained variance
n_boot = 100000
bs = EOFBootstrapper(n_bootstraps=n_boot)
bs.fit(model)
bs_expvar = bs.explained_variance()

# 99% confidence interval of the bootstrapped explained variance
ci_expvar = bs_expvar.quantile([0.005, 0.995], "n")
q005 = ci_expvar.sel(quantile=0.005)
q995 = ci_expvar.sel(quantile=0.995)

# Mode k is significant if its lower CI bound exceeds the upper CI bound of mode k+1
is_significant = q005 - q995.shift({"mode": -1}) > 0
n_significant_modes = is_significant.where(is_significant).cumsum(skipna=False).max().fillna(0)
print("{:} modes are significant at alpha=0.01".format(n_significant_modes.values))
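For intuition, the separability criterion in the last lines above can be sketched with plain numpy on made-up confidence bounds (the numbers below are hypothetical, not from the dataset):

```python
import numpy as np

# Hypothetical 99% CI bounds on explained variance for 5 modes (made up)
q005 = np.array([0.40, 0.20, 0.18, 0.10, 0.08])  # lower bounds
q995 = np.array([0.45, 0.25, 0.22, 0.15, 0.12])  # upper bounds

# Mode k passes if its lower bound exceeds the upper bound of mode k+1.
# The last mode has no successor; padding with NaN makes its comparison
# False, mirroring xarray's shift({"mode": -1}) fill behaviour.
next_upper = np.append(q995[1:], np.nan)
is_significant = (q005 - next_upper) > 0
print(is_significant)  # -> [ True False  True False False]
```

This also illustrates why a pattern like [True, False, ...] can appear: significance here is about CI separation between adjacent modes, not about overall reconstruction quality.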
Here is my result:
By the way, it seems that this line of code in your workbench,
n_significant_modes = ( is_significant.where(is_significant is True).cumsum(skipna=False).max().fillna(0) )
has a small problem: `is_significant is True` is a Python identity check, not an element-wise comparison, so it always evaluates to False and masks every mode. I think it would be better written as is_significant.where(is_significant).
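The pitfall can be shown with plain numpy (a minimal sketch, independent of xeofs):

```python
import numpy as np

is_significant = np.array([True, False, True, True, False])

# `is_significant is True` compares the array *object* with the singleton
# True by identity, so it always evaluates to the plain Python bool False:
print(is_significant is True)  # -> False

# Masking with that scalar would therefore discard every mode, while masking
# with the boolean array itself keeps exactly the significant entries:
print(is_significant[is_significant].size)  # modes kept with the fix -> 3
```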
Thank you in advance.