DanielTakeshi / Algorithmic-HRI


How to get human experience replay #5

Closed DanielTakeshi closed 7 years ago

DanielTakeshi commented 7 years ago

Doing human experience replay the naive way (making a separate numpy array of human transitions, loading it in, and combining it with the built-in dataset in deep_q_rl) makes the code run possibly several orders of magnitude slower. The built-in replay memory has a size of 1 million, and my data is "only" on the order of 10k, so there's no reason my version should be that slow. My guess is that it has something to do with memory issues: if I decrease my human experience replay data by a factor of 10, the speed increases by roughly a factor of 10.
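For reference, the naive approach looks roughly like the sketch below: keep the human transitions in separate numpy arrays and concatenate a sample from them with a minibatch from the agent's replay memory at every training step. The file names, array shapes, and the `random_batch` call are assumptions about how the pieces fit together, not the actual code.

```python
import numpy as np

# Hypothetical arrays of human transitions, loaded once from disk.
# Shapes assume 84x84 grayscale frames with a phi_length of 4.
human_states = np.load("human_states.npy")          # (N, 4, 84, 84)
human_actions = np.load("human_actions.npy")        # (N,)
human_rewards = np.load("human_rewards.npy")        # (N,)
human_next_states = np.load("human_next_states.npy")
human_terminals = np.load("human_terminals.npy")    # (N,), bool

def naive_mixed_minibatch(agent_dataset, batch_size, human_fraction=0.5):
    """Draw part of the minibatch from the agent's replay memory and part
    from the separate human arrays, then concatenate.  The per-step
    indexing and copying of the large human arrays is a plausible source
    of the slowdown described above."""
    n_human = int(batch_size * human_fraction)
    n_agent = batch_size - n_human

    # Sample from deep_q_rl's built-in replay memory (assumed interface).
    s, a, r, s2, t = agent_dataset.random_batch(n_agent)

    # Sample from the separate human arrays.
    idx = np.random.randint(0, len(human_actions), size=n_human)
    s = np.concatenate([s, human_states[idx]])
    a = np.concatenate([a, human_actions[idx]])
    r = np.concatenate([r, human_rewards[idx]])
    s2 = np.concatenate([s2, human_next_states[idx]])
    t = np.concatenate([t, human_terminals[idx]])
    return s, a, r, s2, t
```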

So let's instead figure out how to load the human dataset into the normal experience replay memory in deep_q_rl.
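A minimal sketch of what that could look like, assuming deep_q_rl's DataSet class from ale_data_set.py with its add_sample(img, action, reward, terminal) method; the file names and array layout for the saved human data are hypothetical:

```python
import numpy as np
from deep_q_rl import ale_data_set

# Build the same replay memory the agent normally uses
# (constructor arguments assumed from deep_q_rl's defaults).
rng = np.random.RandomState(42)
dataset = ale_data_set.DataSet(width=84, height=84, rng=rng,
                               max_steps=1000000, phi_length=4)

# Hypothetical human data: single frames plus the action/reward/terminal
# recorded at each step, saved beforehand as numpy arrays.
frames = np.load("human_frames.npy")        # (N, 84, 84), uint8
actions = np.load("human_actions.npy")      # (N,)
rewards = np.load("human_rewards.npy")      # (N,)
terminals = np.load("human_terminals.npy")  # (N,), bool

# Pre-fill the replay memory with the human transitions once, up front,
# so training-time sampling goes through the same code path as the
# agent's own experience instead of a separate numpy array.
for img, a, r, t in zip(frames, actions, rewards, terminals):
    dataset.add_sample(img, a, r, t)
```

This avoids keeping a second copy of the data around and the per-minibatch concatenation entirely, since sampling then happens inside the existing replay memory.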

DanielTakeshi commented 7 years ago

Never mind; likely not getting to this at all.