Related to #863
There is no functionality to support this per se (indicating the episode ended on timeout is not standardized in Gym, although some environments provide this in the info dict). An easy solution for this problem is to provide episode time in observations as suggested in #863.
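For illustration, the time feature can be added with a small gym wrapper along these lines (a sketch with made-up names, not the wrapper shipped with stable-baselines or the zoo; it assumes a 1D Box observation space and a known maximum episode length):

```python
import gym
import numpy as np


class TimeFeatureObsWrapper(gym.Wrapper):
    """Append the remaining episode time, scaled to [0, 1], to the observation
    so the policy can see how close the timeout is."""

    def __init__(self, env, max_steps):
        super(TimeFeatureObsWrapper, self).__init__(env)
        self.max_steps = max_steps
        self.current_step = 0
        low = np.concatenate([env.observation_space.low, [0.0]]).astype(np.float32)
        high = np.concatenate([env.observation_space.high, [1.0]]).astype(np.float32)
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)

    def reset(self, **kwargs):
        self.current_step = 0
        return self._add_time(self.env.reset(**kwargs))

    def step(self, action):
        self.current_step += 1
        obs, reward, done, info = self.env.step(action)
        return self._add_time(obs), reward, done, info

    def _add_time(self, obs):
        remaining = 1.0 - self.current_step / float(self.max_steps)
        return np.concatenate([obs, [remaining]]).astype(np.float32)
```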
Thanks, setting info['TimeLimit.truncated']=True is an elegant solution that won't complicate the code. Was wondering if you could point me to the source code where stable-baselines processes info['TimeLimit.truncated'] (I couldn't find it by searching)?
Ah sorry, I was referring to the final comment from araffin in that issue:
as the time feature is sufficient and avoid including additional complexity in the code (it gets a little more complex when using multiple environments)
There is no support for TimeLimit.truncated in stable-baselines, but this would be a good feature for stable-baselines3, given how common it is.
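For a rough idea, if the environment (or a TimeLimit-style wrapper) does report the timeout through info['TimeLimit.truncated'], a custom training loop could consume it like this; the helper below is illustrative and not part of stable-baselines:

```python
def bootstrap_done(done, info):
    """Return the 'done' that should be used in the value target.

    When the episode ended only because of the step limit (signalled here via
    info['TimeLimit.truncated']), the state is not really terminal, so we keep
    bootstrapping from V(s') instead of zeroing out the future value.
    """
    return done and not info.get("TimeLimit.truncated", False)


# e.g. inside a rollout loop (env, action, gamma, value() assumed to exist):
# next_obs, reward, done, info = env.step(action)
# target = reward + gamma * (1.0 - bootstrap_done(done, info)) * value(next_obs)
```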
For a longer answer regarding the time feature, you can read https://github.com/araffin/rl-baselines-zoo/issues/79
I see. My bad that I jumped straight to #863 without noticing "to provide episode time in observations". Yes, that is a simple and necessary trick, especially when the timeout is intrinsic to the system, as mentioned in one of the two cases in the paper Time Limits in Reinforcement Learning. However, in the paper's second case, where the time limit is set only to facilitate learning (e.g. to get more diverse trajectories), bootstrapping at the last step is mandated.
My case is more like the second one, but it is simpler than cases that mix an env done with a time limit: the time limit is its only termination signal. So I just need an overridden method that simply drops * (1. - done). Is there any possibility for the user to override specific methods in stable-baselines to realize this? Such as
The ease of implementing this yourself depends on the algorithm you want to use, as some do not store info dicts or their information. However, for PPO2 you can modify the code around this point to gather the infos you want and update the return/discount values and dones accordingly.
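Roughly, the adjusted return computation could look like the sketch below; the variable names and the function itself are illustrative, not the actual PPO2 runner code:

```python
import numpy as np


def discounted_returns(rewards, dones, truncateds, next_values, gamma):
    """Illustrative n-step return computation with timeout-aware bootstrapping.

    rewards, dones, truncateds and next_values are arrays over one rollout.
    truncateds[t] is True when step t ended only because of the time limit
    (e.g. read from info['TimeLimit.truncated']); next_values[t] is the
    critic's estimate V(s_{t+1}).
    """
    returns = np.zeros(len(rewards), dtype=np.float64)
    running = next_values[-1]  # bootstrap from the state after the rollout
    for t in reversed(range(len(rewards))):
        if dones[t]:
            # real terminal: no future value; timeout: keep bootstrapping
            running = next_values[t] if truncateds[t] else 0.0
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns
```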
Thanks. I am mainly using AC/ACER. Following your hint, should I modify this method by removing (1.0 - done_seq[i]) from it?
I have little experience with ACER, but that seems to be in the right direction. Sorry, I cannot be of further assistance with this topic, as your guess will probably be more correct than mine :)
It is ok. I have little experience with PPO, so I am trying ACER. Thank you, Miffyli, your comments and quotes are very helpful. I will test it by myself.
Closing as now done in SB3
I am using stable-baselines 2.10.1 to train AC/ACER agents in a custom environment with a time limit [0, T] per episode. In the last update of an episode, the value function is normally updated towards the target

R = r_T + γ (1 - done) V(S^T) = r_T,

which treats state S^T as an absorbing state where no value will be incurred thereafter. In the code, (1. - done) is used. However, for my time-limited case, the update is expected to be like this:

R = r_T + γ V(S^T).
Since training terminates not because a terminal state is reached but because time runs out, V(S^T) still has a value, so the update is expected to keep bootstrapping at this last step.
I skimmed through the source code and neither found this functionality nor figured out where to modify it. Was wondering how to enable this?
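To make the difference concrete, here is a tiny numeric illustration of the last-step target with and without the (1. - done) mask (γ, the reward and the value are made-up numbers):

```python
gamma = 0.99
r_T = 1.0      # reward received at the final (timeout) step
v_T = 5.0      # critic's estimate V(S^T) for the state reached at timeout
done = 1.0     # the environment reports done=True on timeout

# standard target: S^T is treated as absorbing, so the future value is dropped
target_absorbing = r_T + gamma * (1.0 - done) * v_T   # = 1.0

# desired target when the episode ended only because time ran out:
# keep bootstrapping from V(S^T)
target_bootstrap = r_T + gamma * v_T                   # = 5.95

print(target_absorbing, target_bootstrap)
```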