DLR-RM / rl-baselines3-zoo

A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
https://rl-baselines3-zoo.readthedocs.io
MIT License

Stuck at Local Minimum in PPO with CarRacing-v2 Environment #452

Closed bantu-4879 closed 1 month ago

bantu-4879 commented 1 month ago

❓ Question

I've been experimenting with various hyperparameters for the Proximal Policy Optimization (PPO) algorithm in the CarRacing-v2 environment. After extensive testing, I found a combination that initially shows promising results and learns relatively quickly. However, the learning process appears to stagnate after a certain stage of training.

Despite extensive further training, the agent seems unable to surpass a particular performance threshold. I suspect the algorithm is trapped in a local minimum, and not a desirable or acceptable one given what the environment allows.

Request for Assistance: I'm seeking guidance on how to help the algorithm escape the local minimum it is currently stuck in. Any insights, suggestions, or alternative approaches would be greatly appreciated.
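For context on the kind of lever I've been looking at: one commonly suggested knob for premature convergence in PPO is the entropy coefficient (`ent_coef` in Stable Baselines3), which adds an entropy bonus to the objective so that sharply peaked policies are penalized and exploration continues longer. A minimal pure-Python sketch of the idea (using a discrete action distribution purely for illustration; CarRacing-v2 uses a continuous action space by default):

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete action distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# A near-deterministic policy (exploiting one action) vs. a flatter one.
peaked = [0.97, 0.01, 0.01, 0.01]
uniform = [0.25, 0.25, 0.25, 0.25]

# PPO maximizes: policy objective + ent_coef * entropy.
# The peaked policy earns a much smaller entropy bonus, so a larger
# ent_coef pushes the optimizer away from collapsing onto one action.
print(f"peaked:  {entropy(peaked):.3f} nats")
print(f"uniform: {entropy(uniform):.3f} nats")
```

This is only an illustration of the mechanism, not a fix by itself; the appropriate `ent_coef` (or learning-rate schedule) would still need tuning for CarRacing-v2.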

Environment and Configuration:

My work: https://github.com/bantu-4879/Atari_Games-Deep_Reinforcement_Learning/tree/main/Notebooks/CarRacing-v2

Checklist