Closed: rwbfd closed this issue 3 years ago
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!
🚀 Feature
A hook that automatically converts a dataloader newly created inside the training loop to match the current training configuration, such as multi-GPU or distributed training.
Motivation
In reinforcement learning, it is common to generate new data from the environment during training and feed it into a freshly constructed dataloader. Some algorithms even need to build a dataloader from a single batch (the most prominent example being self-imitation learning). In these situations, the only option today is to manually rewrap the dataloader or replace its sampler to match the distributed setup, which is very inconvenient.
Pitch
It would be beneficial to have a hook: whenever a new DataLoader is created inside the training loop, a method would automatically convert it into the appropriate dataloader for the configured training strategy (for example, attaching a distributed sampler when running under DDP).
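To make the request concrete, here is a minimal sketch of what such a hook could do. It uses pure-Python stand-in classes instead of `torch.utils.data` so it runs anywhere; the names `prepare_dataloader`, `DataLoader`, and `DistributedSamplerStub` are illustrative assumptions, not part of Lightning's actual API.

```python
class DataLoader:
    """Minimal stand-in for torch.utils.data.DataLoader."""
    def __init__(self, dataset, batch_size=1, sampler=None):
        self.dataset = dataset
        self.batch_size = batch_size
        self.sampler = sampler

class DistributedSamplerStub:
    """Stand-in for DistributedSampler: each rank sees every
    num_replicas-th sample, starting at its own rank."""
    def __init__(self, dataset, num_replicas, rank):
        self.indices = list(range(rank, len(dataset), num_replicas))
    def __iter__(self):
        return iter(self.indices)

def prepare_dataloader(loader, strategy="single", num_replicas=1, rank=0):
    """The hook this issue asks for: rebuild a freshly created
    dataloader so it matches the current training strategy."""
    if strategy == "ddp" and loader.sampler is None:
        sampler = DistributedSamplerStub(loader.dataset, num_replicas, rank)
        return DataLoader(loader.dataset, loader.batch_size, sampler=sampler)
    return loader  # nothing to convert for single-device training

# Inside the training loop: new data arrives from the environment,
# a new dataloader is built, and the hook converts it automatically.
new_loader = DataLoader(dataset=list(range(10)), batch_size=2)
ddp_loader = prepare_dataloader(new_loader, strategy="ddp",
                                num_replicas=2, rank=0)
print(list(ddp_loader.sampler))  # rank 0 sees indices [0, 2, 4, 6, 8]
```

The point is that user code would only ever construct the plain dataloader; the trainer would call the conversion step itself whenever it detects a new dataloader, so RL loops would not need strategy-specific branches.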