danijar / dreamerv3

Mastering Diverse Domains through World Models
https://danijar.com/dreamerv3
MIT License

Confusion about the initial states of RSSM #82

Closed. xlnwel closed this issue 1 year ago.

xlnwel commented 1 year ago

Hi, thanks for open-sourcing the code.

I notice that, during training, the RSSM states are always initialized from the states of the previous iteration (code). As far as I understand, the training data are randomly sampled from the replay buffer, so the initial states should not be transferable from one iteration to the next. Why does DreamerV3 initialize the state this way?

Moreover, I realize that DreamerV3 learns the initial states, but I'm quite confused about how they get updated. I'm wondering whether the state initialization mentioned above somehow benefits the learnable initial states. If so, how?

schneimo commented 1 year ago

I notice that, during training, the RSSM states are always initialized from the states of the previous iteration (code). As far as I understand, the training data are randomly sampled from the replay buffer, so the initial states should not be transferable from one iteration to the next. Why does DreamerV3 initialize the state this way?

From my understanding of the paper and code, random trajectories are sampled from the replay buffer, and each state of the trajectory is accessed through a queue in line 75 of your linked file. But I am not sure whether state shouldn't be reset to state = [None] after each call of train_step, as it is at the beginning of training. Otherwise, the state is set to the final state of the previous trajectory batch on each subsequent call of train_step, and I am not sure whether that is the correct behaviour. Or was that your initial question?
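
To make sure we are talking about the same thing, here is a minimal sketch of the pattern in question; sample_batch and train_step are placeholders, not the repo's actual functions:

```python
# Sketch of the training-loop pattern being discussed (placeholder names).
def sample_batch():
    return {'obs': ..., 'is_first': ...}   # randomly sampled replay sequences [B, T, ...]

def train_step(batch, state):
    ...                                    # world-model and actor-critic updates
    return {}, state                       # metrics and the final RSSM state of this batch

state = None                               # reset only once, before training starts
for _ in range(3):
    metrics, state = train_step(sample_batch(), state)
    # The state is NOT reset here: the final state of this batch seeds the next
    # call, even though the next batch holds unrelated replayed sequences.
```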

Moreover, I realize that DreamerV3 learns the initial states, but I'm quite confused about how they get updated. I'm wondering whether the state initialization mentioned above somehow benefits the learnable initial states. If so, how?

Regarding your second question: the initial state is state = [None] and is then updated internally by the RSSM. The learning is based on Equation 4 of the paper.
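
If it helps, here is a rough sketch of what a learned initial state can look like; the parameter name and shapes are illustrative, not the repo's exact implementation. The idea is that a learnable vector seeds the initial deterministic state, so whenever a rollout starts from it, gradients from the usual world-model losses flow back into that parameter:

```python
import jax.numpy as jnp

def initial_state(params, batch_size):
    # `params['initial_deter']` is a hypothetical learnable vector. Passing it
    # through tanh and broadcasting it over the batch gives the initial
    # deterministic state; rollouts that start from it propagate gradients
    # from the world-model losses into this parameter.
    deter = jnp.tanh(params['initial_deter'])            # [deter_size]
    deter = jnp.repeat(deter[None], batch_size, axis=0)  # [batch_size, deter_size]
    return {'deter': deter}
```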

danijar commented 1 year ago

Hi, the replay buffer sets is_first = True at the first time step of each replayed sequence, so the RSSM resets itself at the beginning of each batch. The implementation is structured this way because it also allows different replay schemes where batches are consecutive, for training with truncated backpropagation through time.
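
For readers less familiar with the code, the reset can be pictured roughly like this (a sketch under simplifying assumptions, not the exact implementation): wherever is_first is set, the carried-over state and previous action are masked out and the (learned) initial state is mixed in instead.

```python
import jax.numpy as jnp

def apply_is_first(prev_state, prev_action, is_first, init_state):
    # is_first: [batch] 0/1 flags marking the first step of each replayed
    # sequence. Assumes every state entry and the action are [batch, dim].
    mask = 1.0 - is_first[:, None]
    prev_action = prev_action * mask                # forget the stale action
    prev_state = {k: v * mask + init_state[k] * is_first[:, None]
                  for k, v in prev_state.items()}   # swap in the initial state
    return prev_state, prev_action
```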