DLR-RM / stable-baselines3

PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
https://stable-baselines3.readthedocs.io
MIT License

[Question] Wrong scaled_action for continuous actions in `_sample_action()`? #1269

Open KiwiXR opened 1 year ago

KiwiXR commented 1 year ago

❓ Question

The following code samples an action for an off-policy algorithm. As the comments indicate, the continuous actions obtained in line 395 should have already been scaled by tanh, which puts them in the range (-1, 1). However, in line 399, the action is scaled again, which makes the valid action space even smaller. Is this a potential bug or just my misunderstanding?

https://github.com/DLR-RM/stable-baselines3/blob/5aa6e7d340108c8d109cd608468e8dc1fa33030d/stable_baselines3/common/off_policy_algorithm.py#L388-L399
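For reference, the linked lines read roughly as follows (paraphrased from the permalink; line 395 is the predict() call and line 399 is the scale_action() call):

    else:
        # Note: when using continuous actions,
        # we assume that the policy uses tanh to scale the action
        # We use non-deterministic action in the case of SAC, for TD3, it does not matter
        unscaled_action, _ = self.predict(self._last_obs, deterministic=False)

    # Rescale the action from [low, high] to [-1, 1]
    if isinstance(self.action_space, gym.spaces.Box):
        scaled_action = self.policy.scale_action(unscaled_action)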

Below I attach a modified version to demonstrate my idea.

    else:
        # Note: when using continuous actions,
        # we assume that the policy uses tanh to scale the action
        # We use non-deterministic action in the case of SAC, for TD3, it does not matter
        unscaled_action, _ = self.predict(self._last_obs, deterministic=False)
        # unscale in the continuous case, as the prediction is scaled
        if isinstance(self.action_space, gym.spaces.Box):
            unscaled_action = self.policy.unscale_action(unscaled_action)

    # Rescale the action from [low, high] to [-1, 1]
    if isinstance(self.action_space, gym.spaces.Box):
        scaled_action = self.policy.scale_action(unscaled_action)


araffin commented 1 year ago

Hello,

> As the comments indicate, the continuous actions obtained in line 395 should have already been scaled by tanh, which puts them in the range (-1, 1).

I think the comment is misleading. The output of predict() is in [low, high] (that is why it is named unscaled_action); the unscaling is done only when squashing is used (which is the case for all off-policy algorithms): https://github.com/DLR-RM/stable-baselines3/blob/30a19848cef28f88a4cb38667ba74b3963be8f1d/stable_baselines3/common/policies.py#L347-L354

So the scaling afterwards does bring the action back to [-1, 1] before storing it in the replay buffer.
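To make the round-trip concrete, here is a minimal standalone sketch; the two helper formulas mirror BasePolicy.scale_action / unscale_action, and the Box(low=-2, high=2) bounds are made up for illustration:

    import numpy as np

    # Hypothetical bounds of a Box(low=-2, high=2) action space
    low, high = np.array([-2.0]), np.array([2.0])

    def unscale_action(scaled_action):
        # [-1, 1] -> [low, high]; applied inside predict() when squash_output=True
        return low + 0.5 * (scaled_action + 1.0) * (high - low)

    def scale_action(action):
        # [low, high] -> [-1, 1]; applied in _sample_action() before storing
        return 2.0 * ((action - low) / (high - low)) - 1.0

    squashed = np.array([0.5])           # tanh output of the actor, in (-1, 1)
    unscaled = unscale_action(squashed)  # what predict() returns: 1.0, in [low, high]
    buffered = scale_action(unscaled)    # what goes into the replay buffer: back to 0.5
    print(unscaled, buffered)            # [1.] [0.5]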

I would be happy to receive a PR that updates the comment ;)

EDIT: the comment is right (it states the assumption being made) but misleading.

KiwiXR commented 1 year ago

Thank you for the clear explanation! I would be happy to open a PR once I come up with a better comment.

AmorWwQ commented 1 year ago

I wonder why we need to store the scaled action in the replay buffer instead of the final action actually taken in the environment.

araffin commented 1 year ago

> I wonder why we need to store the scaled action in the replay buffer instead of the final action actually taken in the environment.

We need to store the action sampled from the underlying action distribution. For SAC, since it uses a tanh, the scaled and unscaled actions are the same most of the time (when the bounds are symmetric and equal to [-1, 1]). If you store the unscaled action (in [low, high]), you will have issues during the gradient update: for instance, you compute the likelihood of sampling such an action, and if low < -1 or high > 1, that likelihood is zero because SAC only outputs actions in [-1, 1].
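As a toy illustration of the likelihood issue (made-up numbers, assuming a hypothetical Box(low=-2, high=2) space): the tanh squashing means the actor can only produce values in (-1, 1), so an out-of-range stored action has no valid pre-squash value at all:

    import numpy as np

    low, high = -2.0, 2.0

    # An unscaled action like 1.5 is valid in the environment but can never be
    # produced by a tanh-squashed actor, so its likelihood under the policy is zero;
    # there is not even a pre-squash value for it:
    unscaled_action = 1.5
    print(np.arctanh(unscaled_action))   # nan (arctanh is undefined for |x| >= 1)

    # The scaled version stays inside (-1, 1) and is well behaved:
    scaled_action = 2.0 * ((unscaled_action - low) / (high - low)) - 1.0   # 0.75
    print(np.arctanh(scaled_action))     # finite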

For TD3/DDPG, which don't rely on a probability distribution, you will have issues when you want to add noise to the action or when you update the Q-values with actions that cannot come from the actor (see https://github.com/vwxyzjn/cleanrl/issues/279 for the kind of issues you may run into).
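And a similarly toy sketch for the noise case (same hypothetical bounds): if noise is added to the action and the result is then clipped, fixed clip bounds of [-1, 1] only behave correctly when the action is already normalized:

    import numpy as np

    low, high = -2.0, 2.0
    noise = 0.3

    # With the [-1, 1] convention, an action near the boundary is clipped
    # against the true boundary of the normalized space:
    scaled = 0.9                                   # corresponds to 1.8 in [low, high]
    print(np.clip(scaled + noise, -1.0, 1.0))      # 1.0, i.e. high after unscaling

    # If the buffer held unscaled actions instead, the same clip would silently
    # discard most of the action space:
    unscaled = 1.8
    print(np.clip(unscaled + noise, -1.0, 1.0))    # 1.0, even though 2.0 is a valid action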

Of course, you could do the scaling everywhere in the gradient update, but that is bug-prone; having everything in [-1, 1] keeps things simpler.

wagh311 commented 1 year ago

Hello @araffin, I'm curious why the following code is used to scale the action: `return 2.0 * ((action - low) / (high - low)) - 1.0`. In particular, why multiply by 2 and subtract 1?

araffin commented 1 year ago

> In particular, why multiply by 2 and subtract 1?

How would you do it otherwise?

wagh311 commented 1 year ago

I think I now understand the reason for this approach. The scaling is based on min-max normalization followed by a linear transformation, so the action range is mapped to [-1, 1]. Thank you for your reply.
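In other words, the formula is a min-max normalization to [0, 1] followed by an affine map to [-1, 1]. A quick check with a hypothetical Box(low=-2, high=2):

    low, high = -2.0, 2.0

    def scale_action(action):
        normalized = (action - low) / (high - low)   # min-max: [low, high] -> [0, 1]
        return 2.0 * normalized - 1.0                # times 2, minus 1: [0, 1] -> [-1, 1]

    for a in (low, 0.0, high):
        print(a, "->", scale_action(a))
    # -2.0 -> -1.0
    #  0.0 -> 0.0
    #  2.0 -> 1.0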