KiwiXR opened this issue 1 year ago
Hello,
As the comments indicate, the continuous actions obtained in line 395 should have already been scaled by tanh, which puts them in the range (-1, 1).
I think the comment is misleading.
The output of predict() is in [low, high] (that's why it is named unscaled_action); the unscaling is done only when squashing is used (which is the case for all off-policy algorithms):
https://github.com/DLR-RM/stable-baselines3/blob/30a19848cef28f88a4cb38667ba74b3963be8f1d/stable_baselines3/common/policies.py#L347-L354
so the scaling afterwards does scale the action back to [-1, 1] before storing it in the replay buffer.
I would be happy to receive a PR that updates the comment ;)
EDIT: the comment is right (it states the assumption being made) but misleading
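For illustration, here is a minimal sketch of that round trip (the bounds are made up, and the two helpers only mimic the min-max rescaling described above, not the exact SB3 code):

```python
import numpy as np

low, high = np.array([0.0]), np.array([10.0])  # hypothetical, asymmetric env bounds

def unscale_action(scaled_action):
    # [-1, 1] -> [low, high]: the range predict() returns actions in
    return low + 0.5 * (scaled_action + 1.0) * (high - low)

def scale_action(action):
    # [low, high] -> [-1, 1]: applied before storing in the replay buffer
    return 2.0 * ((action - low) / (high - low)) - 1.0

squashed = np.array([0.5])                     # tanh output of the policy, in (-1, 1)
unscaled_action = unscale_action(squashed)     # 7.5, what the environment receives
buffer_action = scale_action(unscaled_action)  # 0.5 again, what goes into the buffer
```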
Thank you for the clear explanation! I would be happy to open a PR once I come up with a better comment.
I wonder why we need to store the scaled action in the replay buffer instead of the final action actually taken in the environment.
We need to store the action sampled from the underlying action distribution. For SAC, which uses a tanh squashing, the scaled and unscaled actions are the same most of the time (when the bounds are symmetric and within [-1, 1]). If you store the unscaled action (in [low, high]) instead, you will have issues during the gradient update: for instance, when computing the likelihood of sampling such an action, it will be zero whenever low < -1 or high > 1, because SAC only outputs actions in [-1, 1].
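As a rough numeric sketch of that point (the bounds and distribution parameters below are made up, and the function only mimics a tanh-squashed Gaussian, not SB3's distribution classes):

```python
import numpy as np

# Log-likelihood of an action under a tanh-squashed Gaussian:
# a = tanh(u), u ~ N(mu, sigma). Any a with |a| >= 1 has zero probability.
mu, sigma = 0.0, 1.0

def squashed_log_prob(a):
    if np.abs(a) >= 1.0:
        return -np.inf  # impossible under the squashed distribution
    u = np.arctanh(a)
    gauss_logp = -0.5 * (((u - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma**2))
    return gauss_logp - np.log(1.0 - a**2)  # change-of-variables correction

print(squashed_log_prob(0.5))  # finite: the scaled action that SB3 stores
print(squashed_log_prob(7.5))  # -inf: the unscaled action if high were 10
```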
For TD3/DDPG, which don't rely on a probability distribution, you will have issues when you want to add noise to the action or when you update the Q-values with actions that cannot come from the actor (see https://github.com/vwxyzjn/cleanrl/issues/279 for the kind of issues you may run into).
Of course, you could do the scaling everywhere in the gradient update, but this is bug-prone, and having everything in [-1, 1] makes things simpler.
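A minimal sketch of the TD3/DDPG case (bounds, actor output and noise scale are made up for illustration):

```python
import numpy as np

low, high = np.array([0.0]), np.array([10.0])  # hypothetical asymmetric bounds
actor_action = np.array([0.8])                 # actor (tanh) output, always in [-1, 1]
noise = np.random.normal(0.0, 0.1, size=1)     # exploration noise with a fixed scale

# With everything in [-1, 1], the noise scale and the clipping range are the same
# for every environment, and the clipped action is still one the actor could output.
buffer_action = np.clip(actor_action + noise, -1.0, 1.0)

# Only at the very end is the action rescaled for the environment.
env_action = low + 0.5 * (buffer_action + 1.0) * (high - low)

# If buffer actions lived in [low, high] instead, the same noise std of 0.1 would
# mean something different for every environment, and the Q-value update could be
# fed actions that the actor can never produce.
```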
Hello @araffin, I'm curious: why use the following code to scale the action?
return 2.0 * ((action - low) / (high - low)) - 1.0
In particular, why multiply by 2 and subtract 1?
How would you do it otherwise?
I think I have understood the reason for the above approach now: the scaling is a min-max normalization followed by a linear change, so the action range is limited to [-1, 1]. Thank you for your reply.
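For concreteness, a tiny worked example of that min-max view (bounds chosen arbitrarily):

```python
low, high = 0.0, 10.0
action = 7.5
fraction = (action - low) / (high - low)  # min-max normalization: 0.75, in [0, 1]
scaled = 2.0 * fraction - 1.0             # stretch to width 2 and shift: 0.5, in [-1, 1]
```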
❓ Question
The following code samples an action for an off-policy algorithm. As the comments indicate, the continuous actions obtained in line 395 should have already been scaled by tanh, which puts them in the range (-1, 1). However, in line 399, the action is scaled again, which would make the valid action space even smaller. Is this a potential bug, or just my misunderstanding?
https://github.com/DLR-RM/stable-baselines3/blob/5aa6e7d340108c8d109cd608468e8dc1fa33030d/stable_baselines3/common/off_policy_algorithm.py#L388-L399
Below I attach a modified version to demonstrate my idea.