SforAiDl / genrl

A PyTorch reinforcement learning library for generalizable and reproducible algorithm implementations with an aim to improve accessibility in RL
https://genrl.readthedocs.io
MIT License

RPC Communication in Distributed RL Training #303

Open Sharad24 opened 3 years ago

Sharad24 commented 3 years ago

There are three ways I can think of for doing distributed training:

  1. Use of PyTorch's distributed training infrastructure. This would require establishing communication protocols specific to the Deep RL case, and would (most likely) all be in Python unless we find a way around it. See the RPC sketch after this list for a rough idea.
  2. Use of Reverb
    • Use TF-based Datasets (@threewisemonkeys-as)
    • A PyTorch wrapper to convert the NumPy arrays, etc. that are received (short-term, up for grabs)
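As a point of reference for option 1, here is a minimal sketch (not genrl code) of what actor-to-learner communication could look like with torch.distributed.rpc. The function names (push_trajectory, run_learner, run_actor) and the trajectory format are purely illustrative assumptions.

```python
import os

import torch
import torch.distributed.rpc as rpc

# Rendezvous settings for the default (env://) init method.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")

_TRAJECTORIES = []  # lives on the learner process only


def push_trajectory(traj):
    # Target function that actors invoke remotely on the learner.
    _TRAJECTORIES.append(traj)
    return len(_TRAJECTORIES)


def run_learner(world_size):
    # Rank 0 holds the central weights and receives trajectories via RPC.
    rpc.init_rpc("learner", rank=0, world_size=world_size)
    # ... the agent's update_params would be called here on the collected data ...
    rpc.shutdown()  # blocks until all outstanding RPCs have completed


def run_actor(rank, world_size):
    rpc.init_rpc(f"actor{rank}", rank=rank, world_size=world_size)
    # Placeholder rollout; a real actor would call agent.select_action here.
    traj = [(torch.randn(4), 0, 1.0)]
    rpc.rpc_sync("learner", push_trajectory, args=(traj,))
    rpc.shutdown()
```

Each function would be launched in its own process (e.g. via torch.multiprocessing), with the learner at rank 0 and the actors at the remaining ranks.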
threewisemonkeys-as commented 3 years ago

I agree that we should target 2 to begin with. We will still need Python multiprocessing here to run the actors and learners separately, right?

As for the structure and how it fits into the rest of the library, I was thinking of having DistributedOnPolicyTrainer and DistributedOffPolicyTrainer classes which act as the main process, spawning the multiple actors while maintaining and updating the central weights. In this case, the agent would only need to implement update_params (to be called in the main process) and select_action (to be called by each actor). The trajectories and weights would be transported through Reverb.
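A rough skeleton of this structure, assuming a Reverb server with a "trajectories" table is already running at the given address. The class name matches the proposal above, but every argument and helper here (actor_process, env_fn, the table name) is a hypothetical placeholder rather than genrl's actual API.

```python
import multiprocessing as mp

import numpy as np
import reverb


def actor_process(agent, env_fn, address, rollout_len):
    # Each actor only needs the agent's select_action; trajectories go to Reverb.
    client = reverb.Client(address)
    env = env_fn()
    obs = env.reset()
    states, actions, rewards, dones = [], [], [], []
    for _ in range(rollout_len):
        action = agent.select_action(obs)
        next_obs, reward, done, _ = env.step(action)
        states.append(obs)
        actions.append(action)
        rewards.append(reward)
        dones.append(done)
        obs = env.reset() if done else next_obs
    client.insert(
        [np.stack(states), np.asarray(actions),
         np.asarray(rewards, dtype=np.float32), np.asarray(dones, dtype=np.float32)],
        priorities={"trajectories": 1.0},
    )


class DistributedOnPolicyTrainer:
    def __init__(self, agent, env_fn, n_actors=4, address="localhost:8000"):
        self.agent = agent
        self.env_fn = env_fn
        self.n_actors = n_actors
        self.address = address
        self.client = reverb.Client(address)

    def train(self, epochs, rollout_len=128):
        for _ in range(epochs):
            # The main process keeps the central weights; each actor gets a copy
            # of the agent (a fuller version would ship weights through Reverb too).
            procs = [
                mp.Process(
                    target=actor_process,
                    args=(self.agent, self.env_fn, self.address, rollout_len),
                )
                for _ in range(self.n_actors)
            ]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            # Pull one trajectory per actor and update the central weights.
            samples = self.client.sample("trajectories", num_samples=self.n_actors)
            batch = [item[0].data for item in samples]
            self.agent.update_params(batch)
```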

I am holding off on #233 since a Reverb buffer wrapper would heavily depend on the structure we go with. Plus, it is not really useful in the non-distributed case.
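For reference, here is a minimal sketch of what such a Reverb-backed buffer wrapper could look like. The push/sample interface, table name, and port are assumptions made for illustration, not the design proposed in #233.

```python
import numpy as np
import reverb
import torch


class ReverbReplayBuffer:
    def __init__(self, capacity, table_name="replay_buffer", port=8000):
        self._table_name = table_name
        # Keep a reference to the server so it stays alive with the buffer.
        self._server = reverb.Server(
            tables=[
                reverb.Table(
                    name=table_name,
                    sampler=reverb.selectors.Uniform(),
                    remover=reverb.selectors.Fifo(),
                    max_size=capacity,
                    rate_limiter=reverb.rate_limiters.MinSize(1),
                )
            ],
            port=port,
        )
        self._client = reverb.Client(f"localhost:{port}")

    def push(self, state, action, reward, next_state, done):
        # Reverb stores numpy-compatible data, so cast everything on the way in.
        self._client.insert(
            [
                np.asarray(state, dtype=np.float32),
                np.asarray(action),
                np.asarray(reward, dtype=np.float32),
                np.asarray(next_state, dtype=np.float32),
                np.asarray(done, dtype=np.float32),
            ],
            priorities={self._table_name: 1.0},
        )

    def sample(self, batch_size):
        # Convert the sampled numpy arrays back into torch tensors for the agent.
        samples = self._client.sample(self._table_name, num_samples=batch_size)
        transitions = [item[0].data for item in samples]
        return [torch.as_tensor(np.stack(field)) for field in zip(*transitions)]
```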

github-actions[bot] commented 3 years ago

Stale issue message