ibpsa / project1-boptest

Building Optimization Performance Tests

Issue135 forecast uncertainty #482

Open laura-zabala opened 2 years ago

laura-zabala commented 2 years ago

The function for the uncertainty emulator and an example notebook for application are included.

javiarrobas commented 2 years ago

Thanks a lot for the PR, @laura-zabala! I've gone through your additions and they are looking great. As I see it, now we have two main options to move forward:

  1. First, further integrate existing functionality into BOPTEST architecture, then, extend functionality (e.g. for emulating uncertainty of variables other than temperature).
  2. First, extend functionality, then integrate into BOPTEST architecture.

My opinion is that we should go for 1. In that case, the steps I suggest to follow (may be missing some) are:

laura-zabala commented 1 year ago

Thank you for your feedback, Javier! I also think it's better to go for option 1. I'll check your suggestions.

laura-zabala commented 1 year ago

Additional step:

dhblum commented 1 year ago

Hey all, thanks for the PR and comments already! Just my thoughts here, sorry they're a bit late. I also vote for option 1. However, I might suggest being a little more strategic in preparation for other variables, and in particular other error models. I suggest renaming predict_error() to something like predict_error_AR(), to make clear that there may later be error prediction models other than just AR.
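For context, a minimal sketch of what an AR-specific error generator of that name could look like. The function name follows the rename suggested above; the signature, parameter names, and AR(1) form are hypothetical, not the actual BOPTEST implementation:

```python
import numpy as np

def predict_error_AR(n_steps, ar_coeff, noise_std, rng=None):
    """Generate a forecast-error trajectory with a first-order
    autoregressive (AR(1)) model: e[t] = ar_coeff * e[t-1] + w[t],
    where w[t] is Gaussian noise with standard deviation noise_std.

    Hypothetical sketch; parameter names and model order are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    errors = np.zeros(n_steps)
    for t in range(1, n_steps):
        errors[t] = ar_coeff * errors[t - 1] + rng.normal(0.0, noise_std)
    return errors
```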

I'm also thinking the approach of propagating parameters directly to get_forecast will become unsustainable once other error methods with their own parameters are introduced. Some variables may also warrant the same error method but different parameters (in which case there would be, for instance, two sets of the current parameters: one for dry-bulb temperature and one for, say, humidity, if both need AR). That will require a more complex architecture for relaying which parameters go with which methods and which variables. Maybe it's OK to proceed as now proposed and cross that refactoring bridge when we get to it, once new error methods need to be introduced for new variables.
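One possible shape for that future refactoring is a per-variable registry that bundles each error method with its parameters, so get_forecast only needs the variable name. Everything here (dictionary layout, variable keys, parameter values) is a hypothetical sketch, not existing BOPTEST code:

```python
# Hypothetical registry: each forecast variable maps to an error
# model identifier plus that model's parameters, instead of every
# parameter being passed through get_forecast() directly.
ERROR_MODELS = {
    'TDryBul': {'model': 'AR', 'params': {'ar_coeff': 0.9, 'noise_std': 0.5}},
    'relHum':  {'model': 'AR', 'params': {'ar_coeff': 0.8, 'noise_std': 2.0}},
}

def get_error_model(variable):
    """Look up the error model and parameters for a forecast variable.

    Returns None when no uncertainty emulation is configured for it.
    """
    return ERROR_MODELS.get(variable)
```

With this layout, two variables can share the AR method while carrying different parameter sets, which is exactly the case raised above.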

laura-zabala commented 1 year ago

I updated predict_error() to predict_error_AR(). I also agree that once we have to define more parameters, too many values will need to be propagated through get_forecast. Maybe we can keep working this way for now and put an alternative way to define them on the "to-do" list.

dhblum commented 1 year ago

@laura-zabala Thanks. Sure, let's keep working this way through everything that's needed from top to bottom, and keep track of some of these "to-dos."

laura-zabala commented 1 year ago

Additional step:

laura-zabala commented 6 months ago

@wfzheng I reviewed the implementation of the function in error_emulator.py and how it is used in forecaster.py, and I think the implementation is correct. I have the following questions:

To-do's from my side:

laura-zabala commented 4 months ago

@wfzheng In the forecaster.py file, in the Forecaster class, the descriptions of the following parameters are missing: weather_temperature_dry_bulb, weather_solar_global_horizontal, seed, category, plot.

wfzheng commented 3 months ago

@laura-zabala

> forecaster.py [Line 70]: Why is the generated error subtracted from the total temperature, instead of added?

Answer: In the previous code found in the notebook UNCERTAINTY_EMULATOR_AUTOREG.ipynb, the errors are calculated with the formula errors = (y.iloc[:, 1:] - y_hat.iloc[:, 1:]). This derives the errors by subtracting the predicted values (y_hat) from the actual values (y). Therefore, to generate the forecast (forecast = y_true - errors), we subtract these errors from the true values (y_true).
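A tiny worked example of that sign convention (illustrative array values only, not data from the notebook): because the errors are defined as actual minus predicted, subtracting them from the true values reproduces a prediction-like trajectory, which is why subtraction rather than addition is used.

```python
import numpy as np

# Sign convention from the thread: errors = actual - predicted,
# so forecast = y_true - errors recovers the predicted values.
y_true = np.array([20.0, 21.0, 22.0])   # "actual" values (illustrative)
y_hat = np.array([19.5, 21.5, 22.5])    # "predicted" values (illustrative)

errors = y_true - y_hat      # actual minus predicted
forecast = y_true - errors   # subtracting the errors recovers y_hat
```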

> forecaster.py [Lines 94-100]: The irradiation is limited, not just to avoid negative values but also to keep values within predefined max and min limits. Should we include something analogous for the temperature?

Answer: I think the current implementation, where we limit the maximum forecasted irradiation (forecast['HGloHor']) to twice the true values, is practical and should generally cover the expected variability in the data without introducing unrealistic predictions. However, I'd appreciate your insights, @dhblum, on whether this approach might lead to any potential issues or whether there's a better way to handle extremes in our forecast model. Looking forward to your feedback!
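A minimal sketch of the clamping rule described above (forecasted irradiation bounded below by zero and above by twice the true values); the function name is hypothetical and the bounds are taken from the discussion, not copied from forecaster.py:

```python
import numpy as np

def limit_irradiation(forecast_hglohor, true_hglohor):
    """Clamp forecasted global horizontal irradiation elementwise to
    the range [0, 2 * true value], per the bounds discussed for
    forecast['HGloHor']. Hypothetical helper, not BOPTEST code.
    """
    return np.clip(np.asarray(forecast_hglohor),
                   0.0,
                   2.0 * np.asarray(true_hglohor))
```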

dhblum commented 1 month ago

Hey @laura-zabala, can you:

  1. update your laura-zabala:issue135_forecastUncertainty branch with the latest ibpsa:issue135_forecastUncertainty, and
  2. merge PR https://github.com/laura-zabala/project1-boptest/pull/3, which I think Wanfu uses to address my comments.

Then I think I'll be able to review things again more easily within this PR.

laura-zabala commented 2 weeks ago

Hi @dhblum, I think it's ready now. Sorry for the late response.