RLGC-Project / RLGC

An open-source platform for applying Reinforcement Learning for Grid Control (RLGC)

Bug in the 2-area Kundur case #16

Open frostyduck opened 3 years ago

frostyduck commented 3 years ago

I've had some free time over the last few days and I think I have figured out the bug: during training, the agent cannot get past the reward plateau of about -602 in your Kundur 2-area case. The issue is that during both training and testing in the environment (the Kundur scheme), short circuits are never simulated; I verified this. In other words, the agent learns purely on the normal operating conditions of the system. In that case the optimal policy is to never apply the dynamic brake, i.e. the actions are always 0, which corresponds exactly to the observed reward value (-602 or -603).

I'm guessing it has something to do with the PowerDynSimEnvDef modifications. Initially you used PowerDynSimEnvDef_v2, and now I am working with PowerDynSimEnvDef_v7. A quick way to reproduce the check is sketched below.
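The following is a minimal sketch of the check, assuming the environment follows the classic Gym `reset`/`step` API (4-tuple return) and that action 0 means "no dynamic brake"; the environment construction itself is omitted, since the exact constructor arguments depend on the repo's training scripts.

```python
import numpy as np

def rollout_zero_policy(env, max_steps=1000):
    """Roll out the 'never apply the dynamic brake' policy (action = 0)
    and return the episode return and length. If no short circuit is
    injected during the episode, the return should sit at the ~-602
    plateau described above."""
    obs = env.reset()
    total_reward = 0.0
    steps = 0
    for steps in range(1, max_steps + 1):
        # action 0 = do not apply the brake (assumption about the action encoding)
        obs, reward, done, info = env.step(0)
        total_reward += reward
        if done:
            break
    return total_reward, steps

# Hypothetical usage: `env` constructed from PowerDynSimEnvDef_v7 (or _v2)
# exactly as in the repo's example scripts.
# total, steps = rollout_zero_policy(env)
# print(total, steps)
```

If this all-zero policy reproduces the -602/-603 return on every episode, it supports the conclusion that no faults are being applied and the agent is only ever seeing normal operating conditions.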

qhuang-pnl commented 3 years ago

Thanks for your feedback and your email. I will look into this and reply to you asap.