Closed: tomasvanoyen closed this issue 1 year ago
Hi,
Yes, PVNet is the primary model that is in production at OCF. This model is the production one for creating region-level PV forecasts, so not for individual sites.
PVMetNet has been trained on site-level forecasts, so it is slightly different, and we haven't tried it for regional PV forecasting, so we can't directly compare the two at the moment.
We haven't tried DGMR for PV predictions either; we were trying that model for predicting future satellite imagery, not PV directly, so we cannot compare that one.
Overall, PVNet is fairly simple and has pretty good performance for regional forecasts, so the combination of operational simplicity and performance is why we have gone with it.
Hi @jacobbieker,
thank you for the prompt response, it clarifies the current choice.
In addition, is there any information available on the validation accuracy of the published weights on huggingface? It is nice to have a pretrained model, yet difficult to use it with confidence without some accompanying information.
Regards and thanks!
Tomas
Hi,
Our metrics for those models are a bit scattered, but @dfulu would have the most up to date results for PVNet.
PVMetNet has WandB training plots here: https://wandb.ai/openclimatefix/PvMetNet, which show the validation and training loss. The best-performing models have an overall error of around 5.7% MAE.
For DGMR we don't really have metrics; those models were only minimally trained for satellite data prediction, and we ran into difficulties fitting such a large model on our hardware.
Thanks, Jacob
Thanks for sharing the wandb.ai logs. Most insightful! Regards, Tomas
Hi @tomasvanoyen, the experimental results for PVNet are also hosted on wandb. If you check out the PVNet model card on huggingface it links to the exact wandb training run of the model. I've kept this up to date alongside the model weights.
There are a few gotchas in amongst the experimental results. One of these is that we used to normalise using the installed capacity of each GSP, whereas now we normalise by the effective capacity of each GSP. The effective capacity accounts for solar panels becoming less efficient over time, so it is always less than the installed capacity. Our metric scores are calculated on the normalised outputs, which means the metrics after this change are higher than they were before: we are now dividing by a smaller number, so the normalised values (and therefore the errors) are slightly larger. However, some back-of-the-envelope calculations have suggested our new scores are better than our old ones, and we definitely see an improvement in our predictions made live.
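To make the capacity-normalisation gotcha concrete, here is a minimal sketch of the effect. All numbers, array names, and the 0.9 degradation factor are hypothetical illustrations, not OCF data or code; the point is only that dividing by a smaller effective capacity inflates the normalised metric even when the MW-level predictions are unchanged.

```python
import numpy as np

# Hypothetical generation and predictions per GSP, in MW
true_mw = np.array([120.0, 300.0, 50.0])
pred_mw = np.array([110.0, 320.0, 55.0])

# Installed capacity vs a smaller "effective" capacity
# (effective < installed because panels degrade; 0.9 is made up)
installed_capacity = np.array([500.0, 1000.0, 200.0])
effective_capacity = installed_capacity * 0.9

def normalised_mae(true, pred, capacity):
    """MAE computed on capacity-normalised outputs."""
    return np.mean(np.abs(true / capacity - pred / capacity))

mae_installed = normalised_mae(true_mw, pred_mw, installed_capacity)
mae_effective = normalised_mae(true_mw, pred_mw, effective_capacity)

# Same MW errors, smaller divisor -> larger normalised metric
assert mae_effective > mae_installed
```

So a jump in the reported normalised MAE across the change does not by itself mean the model got worse; the comparison is only apples-to-apples within one normalisation scheme.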
Also, on the README for this repo there is a link to a google doc with some assorted experimental notes. It's not a particularly clean log, but I've tried to keep it moderately tidy.
Hi @dfulu,
thanks for additional information.
The wandb links in your message and on HuggingFace unfortunately lead to a 404.
I guess a quick fix could point them to the correct training logs.
Thanks in any case.
Regards
Hey @tomasvanoyen, we changed the permissions on that wandb project, so you should be able to view the logs now: https://wandb.ai/openclimatefix/pvnet2.1
Hi all,
I am trying to grasp the entire project, and it appears to me that the current weapon of choice for PV-yield predictions is PVNet (as it is the model currently in production, and the model updated most frequently on HuggingFace).
Yet, I fail to find any documentation on the rationale behind the choice of this architecture. In particular, does this architecture perform better than MetNet / MetNet2 or DGMR? E.g. how does PVNet2 compare with PVMetNet, and what was the rationale for picking one over the other?
Thanks!
Tomas