tahanakabi / DRL-for-microgrid-energy-management

We study the performance of various deep reinforcement learning algorithms on the problem of a microgrid's energy management. We propose a novel microgrid model consisting of a wind turbine generator, an energy storage system, a population of thermostatically controlled loads, a population of price-responsive loads, and a connection to the main grid. The proposed energy management system is designed to coordinate the different sources of flexibility by defining priority resources, direct demand control signals, and electricity prices. Seven deep reinforcement learning algorithms are implemented and empirically compared in this paper. The numerical results show significant differences between the algorithms in their ability to converge to optimal policies. By adding an experience replay and a second semi-deterministic training phase to the well-known asynchronous advantage actor-critic (A3C) algorithm, we achieved considerably better performance and converged to superior policies in terms of energy efficiency and economic value.
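The core idea of combining an actor-critic learner with an experience replay buffer can be sketched roughly as follows. This is an illustrative toy only, not the repository's code: the network sizes, hyperparameters, and the random placeholder environment are assumptions standing in for the microgrid simulation.

```python
# Minimal sketch of an actor-critic learner whose transitions are also stored
# in an experience replay buffer and re-used for extra updates.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)  # actor head (logits)
        self.value = nn.Linear(hidden, 1)            # critic head

    def forward(self, obs):
        h = self.body(obs)
        return self.policy(h), self.value(h).squeeze(-1)

obs_dim, n_actions, gamma = 4, 3, 0.99               # placeholder sizes
net = ActorCritic(obs_dim, n_actions)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                         # experience replay buffer

def update(batch):
    obs, act, ret = map(torch.stack, zip(*batch))
    logits, value = net(obs)
    logp = F.log_softmax(logits, dim=-1).gather(1, act.unsqueeze(1)).squeeze(1)
    advantage = ret - value.detach()
    loss = -(logp * advantage).mean() + F.mse_loss(value, ret)
    opt.zero_grad(); loss.backward(); opt.step()

for episode in range(10):
    # Random observations and rewards stand in for the microgrid environment.
    obs = [torch.randn(obs_dim) for _ in range(20)]
    actions, rewards = [], []
    for o in obs:
        logits, _ = net(o)
        actions.append(torch.distributions.Categorical(logits=logits).sample())
        rewards.append(random.random())
    # Discounted returns, newest transition last.
    ret, returns = 0.0, []
    for r in reversed(rewards):
        ret = r + gamma * ret
        returns.append(torch.tensor(ret))
    returns.reverse()
    for transition in zip(obs, actions, returns):
        replay.append(transition)                     # keep for later re-use
    update(list(zip(obs, actions, returns)))          # on-policy update
    if len(replay) >= 64:
        update(random.sample(replay, 64))             # extra replayed update
```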

There's something I don't quite understand. #6

Open xiaobaidiyiren opened 7 months ago

xiaobaidiyiren commented 7 months ago

The paper mentions "1% of the hourly wind energy generation records from a wind farm in Finland" — what is the 1% for? It also states "32 e/MW, the estimated Levelized Cost of Energy (LCoE) of the latest wind farm project in Finland", but in your code, why did this cost become 3.2 euros? I also have a feeling that the market electricity price of 5.48 euros per kW in your program is problematic.
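For reference, a simple unit-conversion guess (this is only the arithmetic, not a confirmed explanation of the repository's values, and it assumes the quoted LCoE is actually per MWh):

```python
# Illustrative arithmetic only; interpreting "32 e/MW" as 32 EUR/MWh is an assumption.
lcoe_eur_per_mwh = 32.0                          # figure quoted from the paper
lcoe_eur_per_kwh = lcoe_eur_per_mwh / 1000.0     # 0.032 EUR/kWh
lcoe_cents_per_kwh = lcoe_eur_per_kwh * 100.0    # 3.2 cents/kWh
print(lcoe_cents_per_kwh)                        # 3.2
```

Under that reading, 3.2 would correspond to cents/kWh rather than euros, but whether that is what the code intends is exactly the open question here.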