laura-zabala opened 2 years ago
Thanks a lot for the PR, @laura-zabala! I've gone through your additions and they are looking great. As I see it, now we have two main options to move forward:
My opinion is that we should go for 1. In that case, the steps I suggest to follow (may be missing some) are:

1. A `predict_error` method in `forecaster.py`, passing `predict_error` to `get_forecast`. Note that `hp` can be avoided as an argument since it can be parsed from `forecast`. We probably want to use default argument values that lead to a deterministic forecast, that is: `F0=0, K0=0, F=0, K=0, mu=0`.
2. Extending the `Forecast` and `Forecast_Parameters` API calls to pass the new arguments through `forecast_parameters`.
3. A `test_bare_forecast_uncertainty`, something very similar to what is already implemented in `SimulateError.ipynb`. This allows getting comprehensive reference results and having a fully independent test. We may want to replicate this test for different uncertainty scenarios.
4. A `test_get_uncertain_forecast` for `testcase2` (single-zone) and for `testcase3` (multi-zone). These two tests come together because they only need to be implemented once in `PartialForecasterTest`, which is extended in both `ForecasterSingleZoneTest` and `ForecasterMultiZoneTest`.
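To make the first step concrete, here is a minimal sketch of what a `predict_error` method could look like, assuming a first-order autoregressive error model. The parameter names `hp`, `F0`, `K0`, `F`, `K`, and `mu` are taken from the discussion above, but how exactly each one enters the model is an illustrative assumption, not the implementation in the PR:

```python
import numpy as np

def predict_error(hp, F0=0, K0=0, F=0, K=0, mu=0, seed=None):
    """Illustrative AR(1)-style error trajectory over a horizon of hp steps.

    Hypothetical interpretation: F0/K0 shape the initial error, F/K the
    autoregressive propagation, mu a constant bias. With the default
    zero-valued parameters the returned error is all zeros, i.e. the
    forecast stays deterministic, as suggested above.
    """
    rng = np.random.default_rng(seed)
    errors = np.zeros(hp)
    errors[0] = F0 + K0 * rng.standard_normal()
    for i in range(1, hp):
        errors[i] = mu + F * errors[i - 1] + K * rng.standard_normal()
    return errors

# Defaults give a zero error, so the forecast is unchanged.
assert not predict_error(hp=24).any()
```

The zero-valued defaults make the uncertain path a strict superset of the current deterministic behavior, which keeps existing tests passing.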
Thank you for your feedback, Javier! I also think it's better to go for option 1. I'll check your suggestions.
Additional step:
Hey all, thanks for the PR and comments already! Just my thoughts here, sorry they're a bit late. But I also vote for 1. However, I might suggest being a little more strategic in preparation for other variables, and in particular other error models. I suggest calling `predict_error()` something like `predict_error_AR()`, to make clear there may later be error prediction models other than just AR.
I'm also thinking the approach of propagating parameters directly to `get_forecast` will become unsustainable once other error methods are introduced with their own parameters, and some variables may warrant the same error method but different parameters (in which case there would be, for instance, two sets of the current parameters: one for dry-bulb temperature and one for, say, humidity, if both need AR). This will require a more complex architecture for relaying which parameters go with which methods and which variables. Maybe it's OK to proceed as now proposed and cross that refactoring bridge when we get to it, as a result of new error methods being introduced for new variables.
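One way the "which parameters go with which methods go with which variables" relaying could eventually look is a small registry keyed by forecast variable. This is only a sketch of the design being discussed; the variable names, method label, and parameter values below are all invented for illustration:

```python
# Hypothetical registry: each forecast variable carries its own error
# method and parameter set, so get_forecast need not grow a new keyword
# argument for every model parameter. All entries are illustrative.
ERROR_MODELS = {
    "TDryBul": {"method": "AR", "params": {"F0": 0.5, "K0": 1.0, "F": 0.9, "K": 0.3, "mu": 0.0}},
    "relHum":  {"method": "AR", "params": {"F0": 0.2, "K0": 0.5, "F": 0.8, "K": 0.1, "mu": 0.0}},
}

def get_error_config(variable):
    """Return (method, params) for a variable; default is no error model."""
    cfg = ERROR_MODELS.get(variable, {"method": None, "params": {}})
    return cfg["method"], cfg["params"]

method, params = get_error_config("TDryBul")
assert method == "AR" and params["F"] == 0.9
```

A structure like this would let two variables share the AR method with different parameter sets, which is exactly the case raised above.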
I updated `predict_error()` to `predict_error_AR()`.
I also agree that once we have to define more parameters, we'll end up with too many values to propagate in `get_forecast`. Maybe we can continue working this way for now and put an alternative way to define them on the to-do list.
@laura-zabala Thanks. Sure, let's keep working this way through everything that's needed, top to bottom, and keep track of some of these to-dos.
Additional step:
@wfzheng I reviewed the implementation of the function in `error_emulator.py` and how it is used in `forecaster.py`, and I think the implementation is correct. I have the following doubts:

- `forecaster.py` [Line 70]: Why is the generated error subtracted from the total temperature, instead of being added?
- `forecaster.py` [Lines 94-100]: The irradiation is limited, not just to avoid negative values but also to keep values within predefined max and min limits. Should we include something analogous for the temperature?

To-dos from my side:
@wfzheng In the `forecaster.py` file, in the `Forecaster` class, I'm missing the descriptions of the parameters `weather_temperature_dry_bulb`, `weather_solar_global_horizontal`, `seed`, `category`, and `plot`.
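For reference, a sketch of what the missing parameter descriptions could look like in numpydoc style. The descriptions themselves are guesses inferred from the parameter names, not from the code base, and would need to be confirmed against the actual implementation:

```python
class Forecaster:
    """Generate deterministic and uncertain forecasts for a test case.

    Illustrative docstring sketch only; the parameter descriptions
    below are assumptions inferred from the names.

    Parameters
    ----------
    weather_temperature_dry_bulb : dict or None
        Error-model parameters applied to the dry-bulb temperature forecast.
    weather_solar_global_horizontal : dict or None
        Error-model parameters applied to the global horizontal irradiation forecast.
    seed : int or None
        Seed for the random number generator, for reproducible error realizations.
    category : str or None
        Subset of forecast variables to return.
    plot : bool
        If True, plot the generated forecast against the deterministic one.
    """
```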
@laura-zabala

> `forecaster.py` [Line 70]: Why is the generated error subtracted from the total temperature, instead of being added?

Answer: In the previous code found in the notebook `UNCERTAINTY_EMULATOR_AUTOREG.ipynb`, the errors are calculated with `errors = (y.iloc[:, 1:] - y_hat.iloc[:, 1:])`, i.e. by subtracting the predicted values (`y_hat`) from the actual values (`y`). Therefore, to generate the forecast (`forecast = y_true - errors`), we subtract these errors from the true values (`y_true`).
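The sign convention can be checked with a toy example (the numbers are invented; only the algebra matters): if errors are defined as actual minus predicted, then subtracting them from the true values recovers the prediction, which is why the code subtracts rather than adds.

```python
import numpy as np

y_true = np.array([20.0, 21.0, 22.0])  # actual temperatures
y_hat = np.array([19.5, 21.5, 21.0])   # model predictions
errors = y_true - y_hat                # errors = y - y_hat, as in the notebook
forecast = y_true - errors             # subtracting the errors...
assert np.allclose(forecast, y_hat)    # ...recovers the predicted values
```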
> `forecaster.py` [Lines 94-100]: The irradiation is limited, not just to avoid negative values but also to keep values within predefined max and min limits. Should we include something analogous for the temperature?

Answer: I think the current implementation, where we limit the maximum forecasted irradiation (`forecast['HGloHor']`) to twice the true values, is practical and should generally cover the expected variability in the data without introducing unrealistic predictions. However, I'd appreciate your insights, @dhblum, on whether this approach might lead to any potential issues, or whether there's a better way to handle extremes in our forecast model. Looking forward to your feedback!
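A minimal sketch of the limiting being discussed, assuming the bounds are "non-negative and at most twice the true values"; the function name is hypothetical and the exact bounds in the code base may differ:

```python
import numpy as np

def limit_irradiation(forecast, true_values):
    """Clip forecasted global horizontal irradiation to [0, 2 * true].

    Sketch of the limiting discussed above: never negative, and capped
    at twice the corresponding true values.
    """
    return np.clip(forecast, 0.0, 2.0 * true_values)

true_hglohor = np.array([0.0, 100.0, 300.0])
noisy = np.array([-50.0, 250.0, 310.0])
limited = limit_irradiation(noisy, true_hglohor)
assert (limited >= 0).all()
assert (limited <= 2 * true_hglohor).all()
```

An analogous clip for temperature would need different bounds (a fixed offset in kelvin rather than a multiplicative factor), since temperatures near zero in the working unit would make a "twice the true value" cap meaningless.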
Hey @laura-zabala, can you:

- update your `laura-zabala:issue135_forecastUncertainty` branch with the latest `ibpsa:issue135_forecastUncertainty`?

Then I think I'll be able to review things again more easily within this PR.
Hi @dhblum, I think it's ready now; sorry for the late response. The function for the uncertainty emulator and an example notebook for its application are included.