hill-a / stable-baselines

A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
http://stable-baselines.readthedocs.io/
MIT License

[question] Enabling agents to keep bootstrapping in the last step per episode #1004

Closed guoyangqin closed 2 years ago

guoyangqin commented 4 years ago

I am using stable-baselines 2.10.1 to train AC/ACER agents in a custom environment with time limits [0, T] per episode. In the last update per episode, the value function is normally updated by

V(S^{T-1}) = r + 0

which treats state S^T as an absorbing state from which no further value accrues. In the code, the factor (1. - done) implements this.

def discount_with_dones(rewards, dones, gamma):
    discounted = []
    ret = 0  # Return: discounted reward
    for reward, done in zip(rewards[::-1], dones[::-1]):
        ret = reward + gamma * ret * (1. - done)  # fixed off by one bug
        discounted.append(ret)
    return discounted[::-1]
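
For a concrete sense of what the (1. - done) factor does, here is a small worked call (the rewards and gamma are made-up values, purely for illustration):

# Three-step rollout whose last step ends the episode (done = 1).
# With gamma = 0.9 the return of the final step is just its reward,
# i.e. the V(S^{T-1}) = r + 0 target described above.
print(discount_with_dones([1.0, 1.0, 1.0], [0, 0, 1], 0.9))
# -> approximately [2.71, 1.9, 1.0]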

However, for my time-limited case, the update should instead be

V(S^{T-1}) = r + gamma*V(S^T)

Since training terminates not because a terminal state is reached but because time runs out, V(S^T) still has a value, so the training is expected to keep bootstrapping through this last step.

I skimmed through the source code, but I neither found this functionality nor figured out where to make the change. I was wondering how to enable this?

Miffyli commented 4 years ago

Related to #863

There is no functionality to support this per se (indicating the episode ended on timeout is not standardized in Gym, although some environments provide this in the info dict). An easy solution for this problem is to provide episode time in observations as suggested in #863.
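
To make the suggestion concrete, below is a minimal sketch of such a time-in-observation wrapper; it is not the TimeFeatureWrapper shipped with rl-baselines-zoo, and it assumes a 1D Box observation space and the old gym step/reset API that stable-baselines 2 uses.

import numpy as np
import gym
from gym import spaces

class TimeInObsWrapper(gym.Wrapper):
    """Append the remaining episode time, scaled to [0, 1], to each observation."""

    def __init__(self, env, max_steps=1000):
        super().__init__(env)
        self.max_steps = max_steps
        self.current_step = 0
        # Extend the observation space by one dimension for the time feature.
        low = np.append(self.observation_space.low, 0.0)
        high = np.append(self.observation_space.high, 1.0)
        self.observation_space = spaces.Box(low=low, high=high, dtype=np.float32)

    def reset(self, **kwargs):
        self.current_step = 0
        return self._add_time(self.env.reset(**kwargs))

    def step(self, action):
        self.current_step += 1
        obs, reward, done, info = self.env.step(action)
        return self._add_time(obs), reward, done, info

    def _add_time(self, obs):
        remaining = 1.0 - self.current_step / self.max_steps
        return np.append(obs, remaining).astype(np.float32)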

guoyangqin commented 4 years ago

Related to #863

There is no functionality to support this per se (indicating the episode ended on timeout is not standardized in Gym, although some environments provide this in the info dict). An easy solution for this problem is to provide episode time in observations as suggested in #863.

Thanks, setting info['TimeLimit.truncated']=True is an elegant solution that won't complicate the code. I was wondering if you could point me to the source code showing how stable-baselines processes info['TimeLimit.truncated'] (I couldn't find it by searching)?
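
For reference, the flag itself comes from gym's TimeLimit wrapper rather than from stable-baselines: when the step budget runs out, the wrapper sets done=True and writes info['TimeLimit.truncated'] = True (unless the state was already a real terminal). stable-baselines 2 never reads this flag, so a check like the sketch below (using the old gym step API) would have to live in your own rollout/return code.

import gym
from gym.wrappers import TimeLimit

env = TimeLimit(gym.make("CartPole-v1"), max_episode_steps=200)

obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())

# True only if the episode ended because the step budget ran out,
# not because a real terminal state was reached.
timed_out = info.get("TimeLimit.truncated", False)
print("episode ended on timeout:", timed_out)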

Miffyli commented 4 years ago

Ah sorry, I was referring to the final comment from araffin in that issue:

as the time feature is sufficient and avoid including additional complexity in the code (it gets a little more complex when using multiple environments)

There is no support for TimeLimit.truncated in stable-baselines but this would be a good feature for stable-baselines3, given how common it is.

araffin commented 4 years ago

For a longer answer regarding the time feature, you can read https://github.com/araffin/rl-baselines-zoo/issues/79

guoyangqin commented 4 years ago

I see. My bad for jumping directly to #863 without noticing "to provide episode time in observations". Yes, that is a simple and necessary trick, especially when the timeout is intrinsic to the system, as described in one of the two cases in the paper Time Limits in Reinforcement Learning. However, in the paper's second case, where the time limit is set only to facilitate learning (e.g. more diverse trajectories), bootstrapping in the last step is mandatory.

My case is more like the second one, but it is simpler than cases that mix an environment done with a time limit: the time limit is its only termination signal. So I just need to override the relevant method by dropping * (1. - done). Is there any way for the user to override specific methods in stable-baselines to realize this? Such as
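
For illustration, a minimal sketch of the kind of override being discussed, assuming (as in this case) that a timeout can only occur at the very end of the rollout; the function name and the last_value/timed_out arguments are assumptions for the sketch, not part of the stable-baselines API.

def discount_with_timeout_bootstrap(rewards, dones, gamma, last_value, timed_out):
    # Same backward recursion as discount_with_dones, except that when the
    # rollout ended on a timeout we seed the recursion with V(S^T) and ignore
    # the final done flag, so the last target becomes r + gamma * V(S^T).
    discounted = []
    ret = last_value if timed_out else 0.0
    for i, (reward, done) in enumerate(zip(rewards[::-1], dones[::-1])):
        mask = 0.0 if (i == 0 and timed_out) else float(done)
        ret = reward + gamma * ret * (1.0 - mask)
        discounted.append(ret)
    return discounted[::-1]

In the pure time-limit case, the surrounding algorithm only needs to pass its estimate of V(S^T) in as last_value.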

Miffyli commented 4 years ago

The ease of implementing this yourself depends on the algorithm you want to use, as some do not store info dicts or their information. However for PPO2 you can modify code around this point to gather the infos you want and update the return/discount values and dones accordingly.

guoyangqin commented 4 years ago

Thanks. I am mainly using AC/ACER. Following your hint, should I modify this method

https://github.com/hill-a/stable-baselines/blob/3d115a3f1f5a755dc92b803a1f5499c24158fb21/stable_baselines/acer/acer_simple.py#L50-L52

by removing (1.0 - done_seq[i]) from

https://github.com/hill-a/stable-baselines/blob/3d115a3f1f5a755dc92b803a1f5499c24158fb21/stable_baselines/acer/acer_simple.py#L74

Miffyli commented 4 years ago

I have little experience with ACER, but that seems to be in the right direction. Sorry, I cannot be of further assistance on this topic, as your guess will probably be more accurate than mine :)

guoyangqin commented 4 years ago

That's ok. I have little experience with PPO, so I am trying ACER. Thank you, Miffyli, your comments and quotes are very helpful. I will test it myself.

araffin commented 2 years ago

Closing as now done in SB3
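
For readers landing here later: with stable-baselines3 no user-side modification is needed, since the library reads info['TimeLimit.truncated'] during rollout collection and bootstraps from the terminal observation. A minimal usage sketch with the gym API of that era (the environment and hyperparameters are illustrative):

import gym
from gym.wrappers import TimeLimit
from stable_baselines3 import A2C

# The TimeLimit wrapper provides the truncation signal; SB3 then bootstraps
# V(S^T) on timeouts instead of treating them as true terminal states.
env = TimeLimit(gym.make("CartPole-v1"), max_episode_steps=200)
model = A2C("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)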