openclimatefix / PVNet

PVnet main repo
MIT License

Add quantile regression support #44

Closed: dfulu closed this 1 year ago

dfulu commented 1 year ago

Pull Request

Description

Add option to train PVNet with quantile regression. An example training run using these new changes is hosted on wandb. Figure at bottom shows some results from the validation set.
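For context on the change, quantile regression is typically trained with the pinball (quantile) loss. A minimal NumPy sketch for illustration only; the function name and array shapes are assumptions, not PVNet's actual API:

```python
import numpy as np

def pinball_loss(y_pred, y_true, quantiles):
    """Mean pinball (quantile) loss over a set of quantile levels.

    y_pred: array of shape (n_samples, n_quantiles), one column per quantile.
    y_true: array of shape (n_samples,) of observed values.
    quantiles: quantile levels, e.g. [0.1, 0.5, 0.9].
    """
    errors = y_true[:, None] - y_pred  # positive where we under-predict
    q = np.asarray(quantiles)[None, :]
    # Under-prediction is penalised by q, over-prediction by (1 - q)
    return np.mean(np.maximum(q * errors, (q - 1) * errors))
```

Note that with `quantiles=[0.5]` this reduces to half the MAE, which is why the median quantile output is directly comparable to an MAE-trained model.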


[Figure: forecast samples from the validation set]

codecov[bot] commented 1 year ago

Codecov Report

Merging #44 (e06ccbd) into main (2bb19c8) will decrease coverage by 1.09%. The diff coverage is 30.90%.

@@            Coverage Diff             @@
##             main      #44      +/-   ##
==========================================
- Coverage   69.46%   68.37%   -1.09%     
==========================================
  Files          22       22              
  Lines        1575     1622      +47     
==========================================
+ Hits         1094     1109      +15     
- Misses        481      513      +32     
| Impacted Files | Coverage Δ |
|---|---|
| pvnet/training.py | 0.00% <0.00%> (ø) |
| pvnet/utils.py | 24.00% <28.57%> (-0.14%) :arrow_down: |
| pvnet/models/base_model.py | 39.78% <30.00%> (-1.28%) :arrow_down: |
| pvnet/models/multimodal/multimodal.py | 96.90% <100.00%> (+0.06%) :arrow_up: |


peterdudfield commented 1 year ago

Amazing, well done for doing this!!!!

Do you fancy upgrading to nowcasting_datamodel=1.4.14 and then, in your convert, adding a 'properties' field? This field should be a dictionary like `forecast_value_sql.properties = {'plevel_10': forecast_value.plevel_10, 'plevel_90': forecast_value.plevel_90}`

peterdudfield commented 1 year ago

Also, have you added stuff to the PVNet public document? Would be great to get a sense of how the MAE has changed, and how good the quantiles are.

dfulu commented 1 year ago

> Amazing, well done for doing this!!!!
>
> Do you fancy upgrading to nowcasting_datamodel=1.4.14 and then, in your convert, adding a 'properties' field? This field should be a dictionary like `forecast_value_sql.properties = {'plevel_10': forecast_value.plevel_10, 'plevel_90': forecast_value.plevel_90}`

@peterdudfield I was updating these libraries in a separate issue #45. I'll work on this after that pull request is merged to main and then merge it into this.

dfulu commented 1 year ago

> Also, have you added stuff to the PVNet public document? Would be great to get a sense of how the MAE has changed, and how good the quantiles are.

@peterdudfield I haven't yet, but can do. At least the wandb run linked above shows the MAEs. In short, the MAE for this model trained on quantile loss is the same as when the model is trained only on MAE. They get identical scores (allowing for some jitter in the training/validation curves).

dfulu commented 1 year ago

@peterdudfield Also, this isn't quite a full solution yet, since we don't have a way to sum the quantiles to get a probabilistic forecast for the national sum.

For example, we could sum all of the 10th percentile lines from all GSPs, and all the 90th percentile lines, and use this as the range for the PVNet national estimate. However, we'd end up with too large a 10th-90th percentile range for the national sum. This option would essentially assume that the prediction errors are all perfectly correlated.

Alternatively, we might be able to do something similar to adding the uncertainties in quadrature. But this assumes that the errors in the predictions are perfectly uncorrelated, so we'd end up with a 10th-90th percentile range which is too small.

The reality is likely to be somewhere in the middle of these two, with partial correlation of errors. I think the best option might be to train another very small model to estimate the quantiles of the national sum from the output of this model.
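The two bounding assumptions above can be sketched numerically (the spread values are toy numbers, purely illustrative):

```python
import numpy as np

# Per-GSP half-widths of the 10th-90th percentile range (toy numbers).
gsp_spreads = np.array([5.0, 3.0, 4.0])

# Assumption 1: errors perfectly correlated -> spreads add linearly.
# This gives an upper bound on the national spread.
correlated = gsp_spreads.sum()

# Assumption 2: errors perfectly uncorrelated -> add in quadrature.
# This gives a lower bound on the national spread.
uncorrelated = np.sqrt((gsp_spreads ** 2).sum())
```

The true national spread under partial correlation lies between these two bounds.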

peterdudfield commented 1 year ago

> @peterdudfield Also, this isn't quite a full solution yet, since we don't have a way to sum the quantiles to get a probabilistic forecast for the national sum.
>
> For example, we could sum all of the 10th percentile lines from all GSPs, and all the 90th percentile lines, and use this as the range for the PVNet national estimate. However, we'd end up with too large a 10th-90th percentile range for the national sum. This option would essentially assume that the prediction errors are all perfectly correlated.
>
> Alternatively, we might be able to do something similar to adding the uncertainties in quadrature. But this assumes that the errors in the predictions are perfectly uncorrelated, so we'd end up with a 10th-90th percentile range which is too small.
>
> The reality is likely to be somewhere in the middle of these two, with partial correlation of errors. I think the best option might be to train another very small model to estimate the quantiles of the national sum from the output of this model.

Yea, perhaps before training a model, we could first do some analysis on how they are correlated?

peterdudfield commented 1 year ago

@dantravers might actually know this.

The problem is how to add up GSP uncertainties to make the national one. Assuming normal distributions:

  1. If uncorrelated, we can square them, sum, and then take the square root.
  2. If correlated, we can just add them up.

One way to do it would perhaps be a mixture of 1 and 2, and see what is best.

As you suggested @dfulu, we could train a separate model, but perhaps a simple way first is a good start.
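The suggested mixture of 1 and 2 could look something like the sketch below. The function name and the blending parameter `alpha` are hypothetical; `alpha` would be chosen by seeing what scores best on a backtest:

```python
import numpy as np

def blended_spread(gsp_spreads, alpha):
    """Interpolate between the fully-correlated (linear sum) and
    fully-uncorrelated (quadrature sum) estimates of the national spread.

    gsp_spreads: per-GSP spreads at one quantile level.
    alpha: blend weight; 1 -> perfectly correlated, 0 -> uncorrelated.
    """
    linear = np.sum(gsp_spreads)
    quadrature = np.sqrt(np.sum(np.square(gsp_spreads)))
    return alpha * linear + (1 - alpha) * quadrature
```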

dfulu commented 1 year ago

@peterdudfield it will take a bit of time to do the analysis. The biggest component of this is preparing the input dataset, since it takes 317 samples at the same timestamp to give us 1 data point for the correlations, or to train a model. I'll start working on this on a separate branch.

dantravers commented 1 year ago

On adding up the uncertainties: well, as luck would have it, I was just talking about this! I think the right thing to do is: create a correlation matrix C, where each C_ij is the correlation of the timeseries GSPerror_i and GSPerror_j, with i, j indexing the GSPs (from 1 to n), so C is n×n for n GSPs. The GSPerror timeseries is formed by taking forecast minus actual for the GSP over all historical periods we have (for the horizon we are looking at). We could create this based on some backtest / cross-validation period. (I've actually calculated this for many sites in GSPs in my PhD work, but there are some gaps in sites, so it would be better to do it for all GSPs with the same model, I think.)
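Building the correlation matrix C described above from backtest errors could be sketched as follows (function name and array shapes are assumptions):

```python
import numpy as np

def gsp_error_correlation(forecasts, actuals):
    """Correlation matrix of per-GSP forecast errors.

    forecasts, actuals: arrays of shape (n_times, n_gsps) for one
    forecast horizon over a backtest period.
    Returns an (n_gsps, n_gsps) matrix C where C[i, j] is the
    correlation of the error timeseries of GSPs i and j.
    """
    errors = forecasts - actuals  # (n_times, n_gsps)
    return np.corrcoef(errors, rowvar=False)
```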

Strictly speaking, I think you'd have a correlation matrix for each forecast horizon. I expect the correlations will be lower for shorter time horizons and higher for longer ones, as you get more idiosyncratic weather changes at short horizons, but I could be wrong!

Then the (median − quantile) spread (for each upper and lower quantile) would be sqrt(Qᵀ @ C @ Q), where Q is the n×1 vector of (median − quantile level) for each GSP, Qᵀ is Q transposed, and @ is matrix multiplication. (Qᵀ @ C @ Q gives the squared spread, hence the square root.)
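That combination can be sketched directly in NumPy (the function name is hypothetical; note that with C the identity this reduces to the quadrature sum, and with C all ones to the linear sum, matching the two bounds discussed above):

```python
import numpy as np

def national_spread(q, corr):
    """Combine per-GSP (median - quantile) spreads via a correlation matrix.

    q: (n,) vector of per-GSP spreads at one quantile level.
    corr: (n, n) correlation matrix of per-GSP forecast errors.
    Returns the national spread sqrt(q^T C q).
    """
    return float(np.sqrt(q @ corr @ q))
```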

dfulu commented 1 year ago

To keep this up to date with conversations in person: I think @dantravers's idea is a pretty good approximation. However, I think it would be almost the same amount of work to go to what feels like a "full" solution of training a model on all of the GSP predictions to estimate the quantiles. That is the current goal.

peterdudfield commented 1 year ago

Is it worth adding a test to check that the probabilistic output works?
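A minimal sketch of what such a test could assert (names are hypothetical, not PVNet's actual test suite): the quantile output should have one column per quantile level, contain only finite values, and the quantiles should not cross:

```python
import numpy as np

def check_probabilistic_output(y_quantiles, quantile_levels):
    """Basic sanity checks on a quantile forecast array of shape
    (n_samples, n_quantiles). Raises AssertionError on failure."""
    assert y_quantiles.shape[1] == len(quantile_levels)
    assert np.all(np.isfinite(y_quantiles))
    # Each higher quantile should be >= the lower ones (no quantile crossing).
    assert np.all(np.diff(y_quantiles, axis=1) >= 0)
```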