- [x] I have searched through the issue tracker for duplicates
- [x] I have mentioned version numbers, operating system and environment, where applicable:
I have noticed that in the implementation of `PPOPolicy`, the computation of the old log probabilities `logp_old` is performed without minibatching:
```python
with torch.no_grad():
    batch.logp_old = self(batch).dist.log_prob(batch.act)
```
This makes the algorithm unusable when the batch is too large to fit in memory, with no way to control this via `batch_size`. I suggest adding minibatch support:
```python
logp_old = []
with torch.no_grad():
    for minibatch in batch.split(self._batch, shuffle=False, merge_last=True):
        logp_old.append(self(minibatch).dist.log_prob(minibatch.act))
batch.logp_old = torch.cat(logp_old, dim=0).flatten()
```
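For context, here is a minimal standalone sketch of the same idea in plain PyTorch, independent of Tianshou internals. The names `policy_net`, `obs`, `act`, and `chunk_size` are made up for illustration; it only demonstrates that evaluating log probabilities in chunks bounds peak memory by the chunk size rather than the full batch size:

```python
import torch
from torch.distributions import Categorical

# Toy policy and a large batch (hypothetical stand-ins, not Tianshou code).
policy_net = torch.nn.Linear(4, 3)   # 4-dim observations -> 3 discrete actions
obs = torch.randn(10_000, 4)
act = torch.randint(0, 3, (10_000,))

chunk_size = 256
logp_old = []
with torch.no_grad():
    # Each iteration only materializes activations for one chunk,
    # so peak memory scales with chunk_size, not len(obs).
    for start in range(0, len(obs), chunk_size):
        logits = policy_net(obs[start:start + chunk_size])
        dist = Categorical(logits=logits)
        logp_old.append(dist.log_prob(act[start:start + chunk_size]))
logp_old = torch.cat(logp_old, dim=0)
assert logp_old.shape == (10_000,)
```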
The version of Tianshou that I'm using is 1.0.0.