Hi,
I’m a little confused about the metrics and the evaluate_iterative_forecast function defined in score.py. Since mean(xr.ALL_DIMS) averages over all dimensions, my understanding of evaluate_iterative_forecast is as follows: first, you select the values for each step along the lead_time dimension; then you shift the time dimension forward by that lead time (why?); and finally you compute the metric over all remaining dimensions, including longitude, latitude, and time. That means the error is also averaged over the time dimension, so each point in Figure 2 of the paper aggregates the error over all time steps. Am I right?
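To make sure I am reading the code correctly, here is a minimal sketch of what I think is happening. This is my own toy example with made-up dimensions and random data, not the code from score.py, and I use `.mean()` with no arguments as the equivalent of `mean(xr.ALL_DIMS)`:

```python
import numpy as np
import xarray as xr

# Toy "iterative forecast": dims (lead_time, time, lat, lon), 6-hourly init times.
init_times = np.arange(np.datetime64('2017-01-01'), np.datetime64('2017-01-06'),
                       np.timedelta64(6, 'h'))
lead_times = [6, 12, 18]  # hours
lat = np.linspace(-87.1875, 87.1875, 32)
lon = np.linspace(0., 354.375, 64)

fc_iter = xr.DataArray(
    np.random.rand(len(lead_times), len(init_times), len(lat), len(lon)),
    dims=['lead_time', 'time', 'lat', 'lon'],
    coords={'lead_time': lead_times, 'time': init_times, 'lat': lat, 'lon': lon})

# Toy "truth", covering the forecast period plus the maximum lead time.
valid_times = np.arange(np.datetime64('2017-01-01'), np.datetime64('2017-01-07'),
                        np.timedelta64(6, 'h'))
da_valid = xr.DataArray(
    np.random.rand(len(valid_times), len(lat), len(lon)),
    dims=['time', 'lat', 'lon'],
    coords={'time': valid_times, 'lat': lat, 'lon': lon})

def weighted_rmse(da_fc, da_true):
    error = da_fc - da_true                  # aligned on the shared time stamps
    weights = np.cos(np.deg2rad(error.lat))
    weights /= weights.mean()
    # .mean() with no arguments averages over every remaining dimension
    # (time, lat, lon) -- this is the mean(xr.ALL_DIMS) step I am asking about.
    return np.sqrt((error ** 2 * weights).mean())

scores = []
for lt in fc_iter.lead_time.values:
    fc = fc_iter.sel(lead_time=lt)
    # Shift the forecast's time axis forward by the lead time, so it is compared
    # against the truth at the valid time rather than the init time (?).
    fc = fc.assign_coords(time=fc.time + np.timedelta64(int(lt), 'h'))
    scores.append(weighted_rmse(fc, da_valid))

# One scalar score per lead time -> one point per lead time in Figure 2.
rmse_per_lead_time = xr.concat(scores, dim='lead_time')
print(rmse_per_lead_time)
```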
Moreover, could you please explain why the RMSE for the climatology and weekly climatology methods in Figure 2 is constant over time?
Furthermore, could you please explain what N_forecast stands for in the RMSE formula given in the paper?
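For reference, the formula I am referring to reads, as far as I can tell (with f the forecast, t the truth, and L(j) the cosine-latitude weighting):

$$
\mathrm{RMSE} = \frac{1}{N_{\text{forecasts}}} \sum_{i=1}^{N_{\text{forecasts}}} \sqrt{\frac{1}{N_{\text{lat}} N_{\text{lon}}} \sum_{j=1}^{N_{\text{lat}}} \sum_{k=1}^{N_{\text{lon}}} L(j)\,\big(f_{i,j,k} - t_{i,j,k}\big)^2}
$$

Is N_forecasts here simply the number of forecast initialization times, i.e. the length of the time dimension after the lead-time shift?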