:rocket: Blog post on personal website :link: Reinforcement Learning for Offshore Wind Farm Optimisation
*(Animation: the optimisation process in the quasi-dynamic environment)*
This repository holds a coded implementation of a conference paper published by NREL. As no code was publicly available, this work replicates some of the paper's key components. The use case demonstrates how even rudimentary Reinforcement Learning (RL) techniques can be applied to the wake steering control problem, and can even improve on the performance of traditional optimisation techniques.
The code uses NREL's FLORIS - a control-oriented model traditionally used to investigate steady-state wake interactions in wind farm layouts - as the foundation on which the RL optimisation is built.
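As a rough illustration of that foundation, the sketch below evaluates steady-state farm power for one candidate set of yaw angles. It assumes the FLORIS v2-style `FlorisInterface` API; the input file name, flow conditions, and yaw values are placeholders, not values taken from this repository.

```python
# Minimal sketch of a steady-state farm-power evaluation with FLORIS.
# Assumes the v2-style FlorisInterface API; "example_input.json" and the
# flow conditions are hypothetical placeholders.
from floris.tools.floris_interface import FlorisInterface

fi = FlorisInterface("example_input.json")            # hypothetical input file
fi.reinitialize_flow_field(wind_speed=8.0, wind_direction=270.0)

# Solve the steady-state wakes for a candidate yaw configuration (degrees)
# and read back total farm power - the quantity an RL agent can treat as
# its reward signal.
fi.calculate_wake(yaw_angles=[20.0, 10.0, 0.0])
print(fi.get_farm_power())                            # watts
```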
Two distinct environments are implemented for the problem. In the first, the q-learning optimisation is carried out in a 'static' environment with no time dependency associated with wake propagation - the conventional strategy adopted by FLORIS. The second environment introduces a temporal component to the optimisation, enabling a novel exploration of wake propagation in a RANS-based solver. In effect this produces a quasi-dynamic control environment and gives a more interesting insight into the formulation of the reward strategy for the problem.
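One hypothetical way to picture the temporal component is as a transport delay: an upstream yaw command only influences the reward once the wake has advected across the inter-turbine spacing. The sketch below illustrates that idea only - it is not the repository's implementation, and the spacing, wind speed, and time step are invented values.

```python
# Hypothetical illustration of the quasi-dynamic idea: a yaw command is
# queued and only becomes "visible" to the wake solve after the advection
# delay. All names and numbers here are assumptions for illustration.
from collections import deque

SPACING_M, WIND_MS, DT_S = 630.0, 8.0, 10.0        # e.g. ~5D of a 126 m rotor
DELAY_STEPS = round(SPACING_M / (WIND_MS * DT_S))  # ~8 solver steps

in_transit = deque([0.0] * DELAY_STEPS)            # yaw changes mid-advection

def effective_yaw(yaw_command):
    """Return the yaw the downstream wake 'sees' at this time step."""
    in_transit.append(yaw_command)
    # The real environment would re-run the steady-state solve with this
    # value, so successive solves behave quasi-dynamically.
    return in_transit.popleft()
```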
The reward profiles observed during training of the two environments are illustrated in the repository, with further insight into the operation of the quasi-dynamic environment available through the accompanying animation shown in the repository and in the blog post. By discretising the state space, q-learning has been shown to yield effective results, surpassing the improvements proposed by traditional optimisation techniques and packages.
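As a hedged sketch of tabular q-learning over a discretised state space, consider the toy loop below. The yaw bins, hyperparameters, and stand-in reward function are all assumptions; the real code would score each state with a FLORIS farm-power evaluation instead.

```python
# Toy tabular q-learning over discretised yaw states with epsilon-greedy
# exploration. The reward is a stand-in for a FLORIS farm-power call, and
# every constant here is an assumed value chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

YAWS = np.arange(0.0, 30.0, 5.0)          # discretised yaw bins (degrees)
ACTIONS = (-1, 0, +1)                     # shift one bin down / hold / up
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1         # assumed hyperparameters

def reward(state):
    # Stand-in for a FLORIS solve (calculate_wake / get_farm_power):
    # a toy power surrogate peaking at a 15-degree yaw offset.
    return 10.0 - abs(YAWS[state] - 15.0)

Q = np.zeros((len(YAWS), len(ACTIONS)))
state = 0
for _ in range(2000):
    a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(Q[state].argmax())
    nxt = int(np.clip(state + ACTIONS[a], 0, len(YAWS) - 1))
    # One-step q-learning update towards reward plus discounted lookahead.
    Q[state, a] += ALPHA * (reward(nxt) + GAMMA * Q[nxt].max() - Q[state, a])
    state = nxt

# Roll out the greedy policy to see which yaw it settles on.
s = 0
for _ in range(10):
    s = int(np.clip(s + ACTIONS[int(Q[s].argmax())], 0, len(YAWS) - 1))
print("Greedy yaw:", YAWS[s])             # typically settles at 15.0
```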
Install Python dependencies for the repository:

```bash
$ pip install -r requirements.txt
```
:weight_lifting: Training conducted locally on a 2018 MacBook Pro with 8GB RAM.