DLR-RM / rl-baselines3-zoo

A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
https://rl-baselines3-zoo.readthedocs.io
MIT License

[Feature Request] Support Stochastic Weight Averaging (SWA) for improved stability #321

Open pchalasani opened 1 year ago

pchalasani commented 1 year ago

🚀 Feature

Stochastic Weight Averaging (SWA) is a recently proposed technique that can potentially help improve training stability in DRL. There is now an implementation in torchcontrib. Quoting/paraphrasing from their page:

a simple procedure that improves generalization in deep learning over Stochastic Gradient Descent (SGD) at no additional cost, and can be used as a drop-in replacement for any other optimizer in PyTorch. SWA has a wide range of applications and features, [...] including [...] improve the stability of training as well as the final average rewards of policy-gradient methods in deep reinforcement learning.

See the PyTorch SWA page for more.
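To make the idea concrete, here is a minimal, illustrative sketch of the core SWA operation: keep a second copy of the policy network and maintain an equal-weight running average of its parameters over the tail of training. The function name, the standalone loop, and the schedule values below are hypothetical and not part of SB3 or this repo.

```python
import copy

import torch


def update_swa_policy(policy: torch.nn.Module, swa_policy: torch.nn.Module, n_averaged: int) -> int:
    """Fold the current policy weights into the running SWA average.

    After the call, swa_policy holds the equal-weight mean of the
    (n_averaged + 1) snapshots seen so far.
    """
    with torch.no_grad():
        for p_avg, p in zip(swa_policy.parameters(), policy.parameters()):
            p_avg.mul_(n_averaged / (n_averaged + 1)).add_(p / (n_averaged + 1))
    return n_averaged + 1


# Hypothetical usage: start averaging late in training, then evaluate swa_policy.
policy = torch.nn.Linear(4, 2)      # stand-in for an actual policy network
swa_policy = copy.deepcopy(policy)  # holds the running average
n_averaged = 0                      # no snapshots averaged yet

for update in range(1000):
    # ... one gradient update on `policy` would go here ...
    if update >= 800 and update % 50 == 0:
        n_averaged = update_swa_policy(policy, swa_policy, n_averaged)
```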

Motivation

SWA might help improve training stability as well as the final reward in some DRL scenarios. It may also alleviate sensitivity to the random seed used for initialization.

Pitch

See above :)

Alternatives

No response

Additional context

See the PyTorch SWA page for more.


araffin commented 1 year ago

Hello,

can potentially help improve training stability in DRL

Do you have experimental results to back this claim?

In the paper linked in the blog post, results are on A2C/DDPG only (which usually have weaker results than PPO/TD3/SAC), and they used only 3 random seeds, which is not enough to account for noise in the results.

Torch contrib is also now archived and hasn't received any updates in almost 3 years (https://github.com/pytorch/contrib).

EDIT: SWA seems to be directly in pytorch now https://pytorch.org/docs/stable/optim.html#stochastic-weight-averaging
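For reference, the usage pattern for torch.optim.swa_utils looks roughly like the sketch below (the model, optimizer, loader, and schedule values are placeholders; how to wire this into an SB3 policy's optimizer is not shown and would need its own design).

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR

# Placeholders: in practice these would be the policy network,
# its optimizer, and a data/rollout loader.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = [(torch.randn(8, 4), torch.randn(8, 2)) for _ in range(10)]
loss_fn = torch.nn.MSELoss()

swa_model = AveragedModel(model)               # keeps the running average of the weights
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
swa_scheduler = SWALR(optimizer, swa_lr=0.05)  # learning rate used during the SWA phase
swa_start = 75                                 # assumed switch-over point

for epoch in range(100):
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss_fn(model(inputs), targets).backward()
        optimizer.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)     # fold current weights into the average
        swa_scheduler.step()
    else:
        scheduler.step()

# Recompute batch-norm statistics for the averaged model (a no-op here, no BN layers)
torch.optim.swa_utils.update_bn(loader, swa_model)
```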

pchalasani commented 1 year ago

Thanks, I did not know SWA is now in mainline PyTorch. I will look into it. As for empirical evidence, I'll keep experimenting and report back.