Hi Danijar,
do I understand correctly that this line should have batch = 50 to have the same hyperparameters as in the paper? I am asking because I want to investigate why my own PyTorch implementation is slower.
Yep, the defaults in the repo here are tuned for getting quick results (and top performance is still high). The hparams in the paper are a bit different.
https://github.com/danijar/dreamerv2/blob/912ec5da79467b22917cce683c776f034850f91d/dreamerv2/configs.yaml#L24
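For context, the linked line sets the dataset batch size. A minimal sketch of the change being asked about, assuming the paper's Atari setting of batch size 50 and sequence length 50; the exact key layout in configs.yaml at the pinned commit may differ:

```yaml
# Hypothetical excerpt of dreamerv2/configs.yaml; key names assumed.
# The repo default uses a smaller batch for faster wall-clock results,
# while the paper reports training on batches of 50 sequences of length 50.
dataset: {batch: 50, length: 50}
```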