Hi @philtabor
I see that in the line

a_t += discount*(reward_arr[k] + self.gamma*values[k+1]*(1-int(dones_arr[k])) - values[k])

k+1 and k are supposed to be next_state_value and current_state_value respectively. But while making the batches
def generate_batches(self):
    n_states = len(self.states)
    # starting index of each minibatch
    batch_start = np.arange(0, n_states, self.batch_size)
    indices = np.arange(n_states, dtype=np.int64)
    # shuffle only the index array; the stored buffers keep their order
    np.random.shuffle(indices)
    batches = [indices[i:i+self.batch_size] for i in batch_start]
    # the buffers are returned unshuffled; only batches is randomized
    return np.array(self.states), np.array(self.actions),\
           np.array(self.probs), np.array(self.vals),\
           np.array(self.rewards), np.array(self.dones), batches
we have used shuffle, so the batches we receive need not be in the proper order. That is the reason for my confusion.
Hi @Pranav-India
As far as I can see, everything is correct. Yes, it's true that we use shuffle while generating batches, but it only shuffles the indices. So state_arr, action_arr, old_prob_arr, vals_arr, reward_arr, and dones_arr are returned in the correct order.
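To make that concrete, here is a small standalone sketch of the same batching logic with toy values (the numbers are hypothetical, not taken from the repo):

import numpy as np

rewards = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # stands in for any stored buffer
batch_size = 2

n_states = len(rewards)
batch_start = np.arange(0, n_states, batch_size)   # array([0, 2, 4])
indices = np.arange(n_states, dtype=np.int64)
np.random.shuffle(indices)                         # e.g. array([3, 0, 5, 1, 4, 2])
batches = [indices[i:i+batch_size] for i in batch_start]

print(batches)  # e.g. [array([3, 0]), array([5, 1]), array([4, 2])]
print(rewards)  # unchanged: [1. 2. 3. 4. 5. 6.] -- shuffling never touched the buffer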
Then, when we compute

a_t += discount*(reward_arr[k] + self.gamma*values[k+1]*(1-int(dones_arr[k])) - values[k])

we iterate through the indices of the original arrays (we don't use batches here), i.e. for t in range(len(reward_arr)-1): and, inside it, for k in range(t, len(reward_arr)-1):, and we fetch data using t or k. So in my view, k+1 and k are the next_state_value and current_state_value indices respectively.
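For reference, assembling those fragments into the full loop looks like this (the discount and advantage bookkeeping is filled in from the standard GAE recurrence, assuming the usual gamma and gae_lambda hyperparameters on self, so treat it as a sketch rather than a verbatim copy of ppo_torch.py):

advantage = np.zeros(len(reward_arr), dtype=np.float32)
for t in range(len(reward_arr)-1):
    discount = 1
    a_t = 0
    for k in range(t, len(reward_arr)-1):
        # values[k+1]: critic value of the next state; values[k]: current state.
        # Indices follow the original (unshuffled) trajectory order.
        a_t += discount*(reward_arr[k] + self.gamma*values[k+1]*(1-int(dones_arr[k])) - values[k])
        discount *= self.gamma*self.gae_lambda
    advantage[t] = a_t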
Please correct me if I missed something. Thanks in advance!
Hi @NonameUntitled, I went through it again and it seems you are correct. The function returns state_arr, action_arr, old_prob_arr, vals_arr, reward_arr, dones_arr in the correct order, and only batches holds the shuffled indices, so the calculation must be right. Thank you for the response.
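To convince myself, this is roughly how I understand the shuffled batches get consumed afterwards: they fancy-index the ordered arrays, so every advantage stays paired with its own transition (a sketch with assumed tensor-conversion details, not verbatim from ppo_torch.py):

import torch as T

for batch in batches:
    # batch is a shuffled index array; the advantages were already computed
    # in trajectory order, so indexing keeps each one with its transition
    states = T.tensor(state_arr[batch], dtype=T.float)
    actions = T.tensor(action_arr[batch])
    old_probs = T.tensor(old_prob_arr[batch])
    batch_adv = T.tensor(advantage[batch])
    # ... PPO clipped-surrogate update on this minibatch ...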
Hi @philtabor
Thank you for the great tutorials. I am following "ppo_torch.py" (code link) and its tutorial, and one thing I did not understand: why are we not storing next_state and using it to generate a critic value, which would then be used for calculating the loss function? Don't we need the next state in PPO? Am I missing something?