Closed: stwerner97 closed this issue 2 years ago
Thanks for pointing this out. I made a quick fix for this issue.
The Burgers case may be removed in the future, and we highly discourage working on it, since reinforcement learning is not the best choice for this kind of problem (matching reference trajectories). We use this Burgers environment in our paper only to demonstrate the feasibility of applying PIMBRL in such settings; supervised learning or traditional control algorithms are good enough in such cases.
Thanks a lot!
Hi, thanks for open-sourcing your code! 😄
The code in `burgers.py` seems outdated, and when running `python burgers.py`, I get several issues. For example, the module `NN.RL.core` no longer exists. Other issues include arguments being passed whose keywords have likely been renamed (for example, `actor_critic` as opposed to `Actor`). I've checked my installation and could successfully execute the Kuramoto-Sivashinsky example.
Thanks!