Forgive me for being a novice in DRL and CFD. I have tried adjusting parameters, changing the reinforcement learning algorithm, changing the CFD training strategy, and other approaches, including extracting the complete action curve from your results and replaying it to control drag reduction, but the desired effect has not been achieved. BTW, I set the number of mesh cells to 16,200 and the time discretisation scheme to CrankNicolson; the other settings are no different from your paper, and the algorithm used is also Tensorforce's PPO (a configuration sketch follows the figures below). The following picture shows the control effect obtained with the action curve from your paper:
The control curve from your paper looks like this:
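For reference, a minimal sketch of the kind of Tensorforce PPO agent described above, using the `Agent.create` API of Tensorforce 0.6.x (the original repo may pin an older version that uses the `PPOAgent` class instead). The probe count, action bounds, network size, and hyperparameters here are illustrative placeholders, not the paper's values:

```python
from tensorforce import Agent

# Illustrative shapes, bounds, and hyperparameters only --
# substitute the values from your own case and from the paper.
agent = Agent.create(
    agent='ppo',
    states=dict(type='float', shape=(151,)),       # e.g. pressure probe readings
    actions=dict(type='float', shape=(2,),         # two synthetic jets
                 min_value=-0.06, max_value=0.06),
    max_episode_timesteps=80,                      # actions per episode
    network=[dict(type='dense', size=512),
             dict(type='dense', size=512)],
    batch_size=20,
    learning_rate=1e-3,
)
```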
I have no idea why this happens in your code. I am quite confident that the work in this repo functions as it should, as it has been reproduced by other authors; for example, https://github.com/npuljc/RL_control_Nek5000 obtained the same results when they implemented my DRL strategies in Nek5000.
I think the most likely sources of error are either i) how your OpenFOAM case is set up (how the probes / jets are configured, etc.), or ii) how you are coupling the DRL and the CFD (one quick way to test this is sketched at the end of this reply).
In either case, this is not an issue with the present repo; it will depend strongly on your code and will require detailed analysis / debugging of your code to fix. Helping other people set up their own DRL experiments is outside the scope of this repo, and I have no time to help anyway, so I am closing this.
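One way to separate i) from ii) is to take the agent out of the loop entirely: run the OpenFOAM case with a constant, hand-set jet velocity and confirm that the probe signals and forces respond sensibly. A minimal sketch for reading the standard `probes` functionObject output in Python follows; the path `postProcessing/probes/0/p` is the default layout and may differ in your case:

```python
import numpy as np

def read_openfoam_probes(path):
    """Parse an OpenFOAM probes functionObject output file into
    an array of sample times and a (n_times, n_probes) value array."""
    times, rows = [], []
    with open(path) as f:
        for line in f:
            # header lines (probe locations, column labels) start with '#'
            if line.startswith('#') or not line.strip():
                continue
            cols = line.split()
            times.append(float(cols[0]))
            rows.append([float(v) for v in cols[1:]])
    return np.array(times), np.array(rows)

# Hypothetical usage: verify the probes react to a constant jet
# velocity before any DRL is involved.
# times, p = read_openfoam_probes('postProcessing/probes/0/p')
# print(p.shape, p.mean(axis=0))
```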
Hi Mr. Jean, at present I want to run a simple DRL training on the OpenFOAM platform with the cylinder flow case, but after trying for a long time I have not achieved the control effect reported in your paper, and the lift changes dramatically every time the action changes. I have used the method from your paper for smoothing the velocity changes (a sketch of this kind of smoothing follows the figures below), but the phenomenon still occurs. May I ask how to solve it? My training parameters are:
The lift and drag curves compared to the baseline are shown below:
The action curve is shown in the figure:
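Regarding the dramatic lift changes at every action change mentioned above: one common remedy (not necessarily identical to the paper's exact scheme) is to relax the applied jet velocity exponentially toward the agent's latest action at every solver timestep, instead of applying the new action as a step change. A minimal sketch, with an illustrative smoothing rate `alpha`:

```python
def relax_control(applied, target, alpha=0.1):
    """One exponential-relaxation update of the jet velocity toward
    the agent's latest action; smaller alpha gives smoother forcing
    (and smoother lift) at the cost of a slower control response."""
    return applied + alpha * (target - applied)

# Hypothetical usage inside the CFD time loop, once per solver step:
# applied = relax_control(applied, action)
```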