twni2016 / pomdp-baselines

Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022
https://sites.google.com/view/pomdp-baselines
MIT License

introduce recurrent sac-discrete #1

Closed twni2016 closed 2 years ago

twni2016 commented 2 years ago

This PR introduces the recurrent SAC-Discrete algorithm for POMDPs with discrete action spaces. The code is heavily based on the open-source SAC-Discrete implementation https://github.com/ku2482/sac-discrete.pytorch/blob/master/sacd/agent/sacd.py and the SAC-Discrete paper https://arxiv.org/abs/1910.07207.
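For reference, here is a minimal sketch of the discrete-action actor update that SAC-Discrete uses (following the referenced paper/code; policy, qf1, qf2, and obs are illustrative names, not necessarily the ones in this PR, and the recurrent version additionally conditions on the RNN hidden state):

import torch
import torch.nn.functional as F

def sacd_actor_loss(policy, qf1, qf2, alpha, obs):
    # Categorical policy: probabilities over all |A| discrete actions.
    logits = policy(obs)
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)

    # The expectation over actions is computed exactly (no sampling,
    # no reparameterization), which keeps the discrete variant simple.
    with torch.no_grad():
        min_q = torch.min(qf1(obs), qf2(obs))  # shape (batch, |A|)

    # Policy loss: E_{a ~ pi} [ alpha * log pi(a|s) - Q(s, a) ]
    actor_loss = (probs * (alpha * log_probs - min_q)).sum(dim=-1).mean()

    # Exact policy entropy, reused for the temperature (alpha) update.
    entropy = -(probs * log_probs).sum(dim=-1)
    return actor_loss, entropy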

We provide two sanity checks on classic gym discrete control environments: CartPole-v0 and LunarLander-v2. The commands for running Markovian and recurrent SAC-discrete algorithms are:

# CartPole
python3 policies/main.py --cfg configs/pomdp/cartpole/f/mlp.yml --target_entropy 0.7 --cuda -1
# CartPole-V
python3 policies/main.py --cfg configs/pomdp/cartpole/v/rnn.yml --target_entropy 0.7 --cuda 0
# LunarLander
python3 policies/main.py --cfg configs/pomdp/lunalander/f/mlp.yml --target_entropy 0.7 --cuda -1
# LunarLander-V
python3 policies/main.py --cfg configs/pomdp/lunalander/v/rnn.yml --target_entropy 0.5 --cuda 0

where target_entropy sets the target entropy as a ratio of log(|A|), i.e. target entropy = ratio * log(|A|).
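Concretely, a minimal sketch of how that target entropy and the temperature update can be computed (assuming the usual log-alpha parameterization; variable names are illustrative):

import numpy as np
import torch

ratio = 0.7                 # value passed via --target_entropy
num_actions = 2             # |A|, e.g. 2 for CartPole-v0
target_entropy = ratio * np.log(num_actions)

# alpha is optimized in log space so that it stays positive.
log_alpha = torch.zeros(1, requires_grad=True)   # alpha starts at 1.0
alpha_optimizer = torch.optim.Adam([log_alpha], lr=3e-4)

def update_alpha(entropy):
    # entropy: current (exact) policy entropy, detached from the policy graph.
    alpha_loss = -(log_alpha * (target_entropy - entropy.detach())).mean()
    alpha_optimizer.zero_grad()
    alpha_loss.backward()
    alpha_optimizer.step()
    return log_alpha.exp().item()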

(Screenshots: learning curves for the four runs above.)
hai-h-nguyen commented 2 years ago

Hi, I ran this command: python3 policies/main.py --cfg configs/pomdp/cartpole/f/mlp.yml --target_entropy 0.7 --cuda -1. Even though it seems to solve the domain, rl_loss/alpha, rl_loss/policy_loss, rl_loss/qf1_loss, and rl_loss/qf2_loss increase/decrease very quickly and do not seem to stop. Is something weird going on here?

twni2016 commented 2 years ago

Hi,

Yes, I did observe that Markovian SAC-Discrete is unstable on CartPole across seeds. You may try disabling the automatic tuning of alpha and grid-searching over a fixed alpha, using

--noautomatic_entropy_tuning --entropy_alpha 0.1

I do not have much insight into it, unfortunately.
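For context, a rough sketch of what those two flags switch between (function and argument names here are illustrative, not necessarily the ones used in the repo):

import torch

def get_alpha(automatic_entropy_tuning, log_alpha, entropy_alpha):
    # Automatic tuning: alpha is a learned parameter pushed toward the target entropy.
    if automatic_entropy_tuning:
        return log_alpha.exp()
    # With --noautomatic_entropy_tuning --entropy_alpha 0.1, alpha is held
    # constant, so a grid search over values such as {0.05, 0.1, 0.2}
    # replaces the temperature optimization.
    return torch.tensor(entropy_alpha)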

hai-h-nguyen commented 1 year ago

Hi, when running recurrent SACD on my domains, there is often a period when the agent doesn't seem to change much (the learning curve is just flat, e.g. over 0-15k timesteps as in the figure below). Do you have any insight?

[Figure: a flat learning curve over roughly the first 15k timesteps.]

twni2016 commented 1 year ago

It seems that your task has sparse rewards. I guess the entropy is high at the early learning stage, and as training proceeds the entropy decreases to a threshold at which the agent can exploit its "optimal" behavior to receive some positive rewards.

hai-h-nguyen commented 1 year ago

Yeah, that's right. However, the agent does experience rewards during the 0-10k period, yet the policy gradient doesn't seem to be large. And the policy during evaluation didn't change much, often not getting any success at all, even though the reward is not so sparse that it is hard to get a positive one.

RobertMcCarthy97 commented 1 year ago

@hai-h-nguyen Did you ever find a solution to these issues? I am experiencing somewhat similar behaviour.

hai-h-nguyen commented 1 year ago

Hi @RobertMcCarthy97, it might help to start alpha at a smaller value, like 0.01, rather than the starting value in the code (1.0), so that the agent explores less initially.
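For example, if alpha is optimized in log space with log_alpha initialized to zero (i.e. alpha = 1.0), the change could look roughly like this (variable names are illustrative, not necessarily the ones in the repo):

import math
import torch

initial_alpha = 0.01   # instead of the default alpha = 1.0 (log_alpha = 0)
log_alpha = torch.full((1,), math.log(initial_alpha), requires_grad=True)
alpha = log_alpha.exp()   # ~0.01 at the start; still adjustable by auto-tuning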