Lightning-AI / pytorch-lightning

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0

During manual training, automatically create the corresponding dataloader for different settings (distributed, multi-GPU, etc.) #5258

Closed: rwbfd closed this issue 3 years ago

rwbfd commented 3 years ago

🚀 Feature

A function that automatically converts a dataloader newly created inside the training loop so that it matches the current training setting (multi-GPU, distributed, etc.).

Motivation

In reinforcement learning, it is common to generate new training data from the environment, and this new data is then fed into a new dataloader. In fact, some algorithms even need to create a dataloader from a single batch (the most prominent example being self-imitation learning). In this situation, the only option today is to manually change the dataloader or sampler, which is very inconvenient (see the sketch below).
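A toy sketch of the situation described above, using plain PyTorch; `collect_rollout` and `rebuild_dataloader` are illustrative placeholders, not Lightning APIs. New data arrives mid-training, so the DataLoader has to be rebuilt, and under DDP the user must also remember to attach a `DistributedSampler` by hand:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def collect_rollout(num_steps: int = 128) -> TensorDataset:
    """Placeholder for interacting with the environment; returns fake tensors."""
    states = torch.randn(num_steps, 4)
    actions = torch.randint(0, 2, (num_steps,))
    returns = torch.randn(num_steps)
    return TensorDataset(states, actions, returns)


def rebuild_dataloader(dataset, batch_size: int = 32) -> DataLoader:
    """The manual step this issue wants automated: pick the right sampler by hand."""
    sampler = None
    # Under DDP each process must see a distinct shard, so a DistributedSampler
    # has to be re-created every time a new dataset appears mid-training.
    if torch.distributed.is_available() and torch.distributed.is_initialized():
        sampler = DistributedSampler(dataset, shuffle=True)
    return DataLoader(
        dataset,
        batch_size=batch_size,
        sampler=sampler,
        shuffle=(sampler is None),
    )
```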

Pitch

It would be beneficial to have a hook: whenever a new DataLoader is created, a method automatically converts it into the appropriate dataloader for the training setup configured on the Trainer.
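A minimal sketch of what such a hook could essentially do, written as a plain helper; the name `adapt_dataloader` is hypothetical and not an existing Lightning API. It takes any freshly created DataLoader and rebuilds it with the sampler appropriate for the active distributed setting:

```python
import torch
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler


def adapt_dataloader(loader: DataLoader) -> DataLoader:
    """Hypothetical conversion the requested hook would apply automatically."""
    # Single-process training: the loader can be used as-is.
    if not (torch.distributed.is_available() and torch.distributed.is_initialized()):
        return loader
    # Under DDP, re-wrap the same dataset with a DistributedSampler so each
    # rank sees a distinct shard, preserving the most common loader settings.
    sampler = DistributedSampler(loader.dataset, shuffle=True)
    return DataLoader(
        loader.dataset,
        batch_size=loader.batch_size,
        sampler=sampler,
        num_workers=loader.num_workers,
        collate_fn=loader.collate_fn,
        pin_memory=loader.pin_memory,
        drop_last=loader.drop_last,
    )
```

The request is that the Trainer (or a dedicated hook) performs this conversion on any dataloader created inside the training loop, so users do not have to duplicate the distributed-specific wrapping themselves.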

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!