Implementing on-policy algorithms would require an online (rollout) buffer that holds the transitions, computes the generalized advantage estimation (GAE) returns, and then samples minibatches during training.
Or should we leave this part on the agent side and have the agent compute the returns?
Either way, something extra is needed: the current simple buffer only samples random batches, so it lacks the features on-policy learning requires.
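If we go the buffer route, here is a minimal sketch of what it could look like: a NumPy-backed rollout buffer that stores one rollout, runs a backward GAE pass, and yields shuffled minibatches. The class name `RolloutBuffer`, the `add` / `compute_returns_and_advantages` / `iter_minibatches` methods, and the default hyperparameters are all assumptions for illustration, not existing code in this repo.

```python
# Hypothetical sketch: on-policy rollout buffer with GAE (names are assumptions).
import numpy as np


class RolloutBuffer:
    """Stores one rollout, computes GAE advantages/returns, yields minibatches."""

    def __init__(self, capacity, obs_dim, gamma=0.99, gae_lambda=0.95):
        self.gamma = gamma
        self.gae_lambda = gae_lambda
        self.capacity = capacity
        self.obs = np.zeros((capacity, obs_dim), dtype=np.float32)
        self.actions = np.zeros(capacity, dtype=np.int64)
        self.rewards = np.zeros(capacity, dtype=np.float32)
        self.values = np.zeros(capacity, dtype=np.float32)
        self.log_probs = np.zeros(capacity, dtype=np.float32)
        self.dones = np.zeros(capacity, dtype=np.float32)
        self.advantages = np.zeros(capacity, dtype=np.float32)
        self.returns = np.zeros(capacity, dtype=np.float32)
        self.ptr = 0  # number of transitions stored so far

    def add(self, obs, action, reward, value, log_prob, done):
        """Store one transition collected by the current policy."""
        assert self.ptr < self.capacity, "buffer full; call reset() after an update"
        self.obs[self.ptr] = obs
        self.actions[self.ptr] = action
        self.rewards[self.ptr] = reward
        self.values[self.ptr] = value
        self.log_probs[self.ptr] = log_prob
        self.dones[self.ptr] = float(done)
        self.ptr += 1

    def compute_returns_and_advantages(self, last_value):
        """Backward GAE pass: delta_t = r_t + gamma * V_{t+1} * (1 - done_t) - V_t."""
        gae = 0.0
        for t in reversed(range(self.ptr)):
            next_value = last_value if t == self.ptr - 1 else self.values[t + 1]
            non_terminal = 1.0 - self.dones[t]  # mask bootstrapping at episode ends
            delta = self.rewards[t] + self.gamma * next_value * non_terminal - self.values[t]
            gae = delta + self.gamma * self.gae_lambda * non_terminal * gae
            self.advantages[t] = gae
        self.returns[:self.ptr] = self.advantages[:self.ptr] + self.values[:self.ptr]

    def iter_minibatches(self, batch_size):
        """Yield shuffled minibatches over the collected rollout (not random replay)."""
        indices = np.random.permutation(self.ptr)
        for start in range(0, self.ptr, batch_size):
            idx = indices[start:start + batch_size]
            yield (self.obs[idx], self.actions[idx], self.log_probs[idx],
                   self.advantages[idx], self.returns[idx])

    def reset(self):
        """Discard the rollout after the policy update; on-policy data is used once."""
        self.ptr = 0
```

One consideration for the buffer-vs-agent question: keeping GAE in the buffer keeps the agent's update loop simple (it just iterates minibatches), while the agent-side option keeps the buffer as dumb storage. Either could work, but only one place should own the gamma / lambda hyperparameters.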