🚀 Feature
This was first suggested/reported in #4000 and addressed in #4006, but then reverted in #4040 due to a GPU testing failure.

Currently, batch size scaling cannot be used with dataloaders passed directly to `trainer.fit()`, as an exception is raised at:
https://github.com/PyTorchLightning/pytorch-lightning/blob/fe34bf2a653ebd50e6a3a00be829e3611f820c3c/pytorch_lightning/tuner/batch_size_scaling.py#L56-L58

The reason why this limitation was set is explained in #4006: it ensures that the user has a field `self.batch_size` that the tuner can alter.
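For context, a minimal sketch of the current behavior (the `LitModel`, the synthetic dataset, and the call pattern are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self, batch_size: int = 32):
        super().__init__()
        # The field the tuner mutates between trials of its search.
        self.batch_size = batch_size
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

    def train_dataloader(self):
        # Re-invoked by the tuner after it changes `self.batch_size`.
        dataset = TensorDataset(torch.randn(1024, 32), torch.randint(0, 2, (1024,)))
        return DataLoader(dataset, batch_size=self.batch_size)


# Supported today: the dataloader comes from the model hook above.
pl.Trainer(auto_scale_batch_size=True, max_epochs=1).tune(LitModel())

# Not supported today: the tuner has no way to apply a new batch size to an
# externally built loader, so the check linked above raises an exception.
loader = DataLoader(
    TensorDataset(torch.randn(1024, 32), torch.randint(0, 2, (1024,))),
    batch_size=32,
)
pl.Trainer(auto_scale_batch_size=True, max_epochs=1).tune(
    LitModel(), train_dataloaders=loader
)
```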
Motivation
Pitch

To remove this limitation, we instead replace the dataloader with a newly instantiated dataloader that uses the altered batch size; a sketch of such a helper follows.
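A minimal sketch of what that re-instantiation could look like. The helper name and the set of copied constructor arguments are assumptions for illustration, not an existing Lightning API, and real-world cases (DataLoader subclasses, custom samplers and batch samplers) would need extra care:

```python
from torch.utils.data import DataLoader, RandomSampler


def _reinstantiate_with_batch_size(dataloader: DataLoader, batch_size: int) -> DataLoader:
    """Hypothetical helper: rebuild `dataloader` with a new batch size.

    Instead of mutating `model.batch_size` and re-calling the hook, the tuner
    would call this on the loader the user passed to `fit()`/`tune()`.
    """
    return DataLoader(
        dataloader.dataset,
        batch_size=batch_size,
        # Preserve shuffling behaviour; a custom sampler would have to be
        # re-created rather than copied, which is the hard part in practice.
        shuffle=isinstance(dataloader.sampler, RandomSampler),
        num_workers=dataloader.num_workers,
        collate_fn=dataloader.collate_fn,
        pin_memory=dataloader.pin_memory,
        drop_last=dataloader.drop_last,
        timeout=dataloader.timeout,
        worker_init_fn=dataloader.worker_init_fn,
    )
```

The tuner's search loop would then swap in the rebuilt loader on each trial instead of requiring a `self.batch_size` field on the model.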
Alternatives
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @borda @justusschock @awaelchli @ninginthecloud @rohitgr7 @otaj @akihironitta