keiohta / tf2rl

TensorFlow2 Reinforcement Learning
MIT License

Slow sampling from expert dataset in IRL training loop #126

Closed · HesNobi closed this issue 3 years ago

HesNobi commented 3 years ago

Hi. After loading a pre-recorded expert dataset of size 1e6, I have realized that np.random.choice with replace=False is extremely slow, to the point of being unusable (batch size 100).

I am wondering if it can be replaced with something faster.

Thanks for the great project.

  # Do not allow duplication!!!
  indices = np.random.choice(
      self._random_range, self._irl.batch_size, replace=False)
  self._irl.train(

P.S. I am using the dataset from Berkeley's D4RL project.
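
For reference, a minimal standalone way to reproduce the timing gap, assuming a buffer of 1e6 transitions and a batch size of 100 as described above (the variable names here are made up for the sketch, not taken from tf2rl):

  import timeit
  import numpy as np

  buffer_size, batch_size = 1_000_000, 100

  # Sampling without replacement via the legacy API (the pattern quoted above).
  slow = timeit.timeit(
      lambda: np.random.choice(buffer_size, batch_size, replace=False), number=10)

  # The same draw with duplicates allowed, for comparison.
  fast = timeit.timeit(
      lambda: np.random.choice(buffer_size, batch_size, replace=True), number=10)

  print(f"replace=False: {slow / 10:.4f} s per call")
  print(f"replace=True:  {fast / 10:.4f} s per call")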

(Two screenshots, dated 2021-06-02, attached to the original issue.)

ymd-h commented 3 years ago

Hi, @HesNobi @keiohta

Here is my quick test of random sampling.

The results show:

  1. Sampling without duplication is slow.
  2. If we allow duplication, sampling becomes about 1000 times faster.
  3. If we cannot allow duplication, but the data size is a multiple of the batch size and stratified sampling is acceptable, non-duplicated sampling also becomes about 1000 times faster (a sketch of both fast variants follows below).
(Screenshot with the timing results, dated 2021-06-03, attached to the original comment.)
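
A minimal sketch of the two fast variants, assuming a dataset of 1e6 samples and a batch size of 100; the "stratified" reading here (one index drawn from each of batch_size equal strata) is my interpretation, not necessarily the exact scheme benchmarked above:

  import numpy as np

  rng = np.random.default_rng()
  data_size, batch_size = 1_000_000, 100

  # Variant 2: allow duplicates -- a single vectorized draw.
  dup_indices = rng.integers(0, data_size, size=batch_size)

  # Variant 3 (one possible reading of "stratified"): split the index range
  # into batch_size equal strata and draw one index from each stratum.
  # Indices are unique by construction and the draw stays vectorized.
  stratum_size = data_size // batch_size  # assumes data_size % batch_size == 0
  offsets = rng.integers(0, stratum_size, size=batch_size)
  strat_indices = np.arange(batch_size) * stratum_size + offsets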
keiohta commented 3 years ago

Hi @ymd-h , thanks for measuring the time for random sampling!

I believe allowing duplication should be okay, considering the results above.

If neither of you objects, I'll change the corresponding part.

HesNobi commented 3 years ago

Hi, I've done some digging, and apparently using random.sample is much faster than np.random.choice. Thanks for keeping this project alive.

(Screenshot, dated 2021-06-03, attached to the original comment.)
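
A minimal sketch of that substitution, using the same hypothetical buffer and batch sizes as above:

  import random
  import numpy as np

  buffer_size, batch_size = 1_000_000, 100

  # random.sample draws unique values without materializing the whole range,
  # avoiding the slow path of np.random.choice(..., replace=False).
  indices = np.asarray(random.sample(range(buffer_size), batch_size))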

HesNobi commented 3 years ago

I am going to close this issue since it has been acknowledged properly.

majiang commented 2 years ago

I found this via @ymd-h 's blog article. Do you know if there is an official NumPy resource about this? I'd open an issue or PR if not.

ymd-h commented 2 years ago

I did some additional investigation (after the PR was merged).

Unlike the legacy free function (i.e. np.random.choice), the recommended generator method (np.random.Generator.choice) uses a heuristic algorithm.

https://github.com/numpy/numpy/blob/410a89ef04a2d3c50dd2dba2ad403c872c3745ac/numpy/random/_generator.pyx#L795-L837
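
For completeness, a small sketch of the generator-based API; which algorithm it picks depends on NumPy's internal heuristics linked above, so any timings are illustrative only:

  import numpy as np

  rng = np.random.default_rng(seed=0)

  # Generator.choice selects its sampling algorithm heuristically based on the
  # requested sample size relative to the population, so a small draw without
  # replacement from a large range does not hit the legacy worst case.
  indices = rng.choice(1_000_000, size=100, replace=False)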

In my opinion, we don't need to open a new issue in the NumPy repository.

To determine the fastest method for us, we need additional study. (However, I think the study is low priority as long as the current implementation is sufficient.)

If anyone has a problem with the current implementation, please feel free to tell us.

Ref: Non-repetitive random number in numpy | Stack Overflow