I agree with this.
The previous syntax gpus=0 allowed the user to express "I don't want to use any GPUs", which was interpreted as "I want to stick to the CPU". With the new syntax, however, the combination accelerator="gpu", devices=0 is contradictory: falling back to the CPU would be incorrect, as a GPU was specifically requested.
For this edge case, consistency with the xpus notation cannot be achieved, and this is perfectly fine.
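For context, here is a minimal sketch contrasting the two spellings discussed above. The gpus argument belongs to the older Trainer API under discussion and has since been removed, so this is illustrative of those releases rather than current ones:

```python
from pytorch_lightning import Trainer

# Old syntax: "zero GPUs" was read as "stay on the CPU".
trainer = Trainer(gpus=0)

# New syntax: the accelerator is requested explicitly, so zero devices
# is contradictory: a GPU was asked for, but no devices were granted.
trainer = Trainer(accelerator="gpu", devices=0)
```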
Closing, as this no longer applies. We now raise an error consistently for all accelerators: https://github.com/Lightning-AI/lightning/blob/97020bf8d7a88ca5195534b8585a5ef53f1ce6cb/src/lightning/pytorch/trainer/connectors/accelerator_connector.py#L327-L336
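For illustration, a hypothetical sketch of the kind of guard the linked connector code performs; the function name and exception type here are assumptions, not the actual Lightning source:

```python
# Hypothetical sketch, not the actual Lightning implementation.
def _check_devices_flag(devices, accelerator: str) -> None:
    # Requesting any accelerator with zero devices is contradictory,
    # so raise instead of silently falling back to another accelerator.
    if devices == 0:
        raise ValueError(
            f"`Trainer(devices={devices!r})` is not a valid value"
            f" for the requested accelerator {accelerator!r}."
        )
```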
Motivation
After #12410, we don't fall back to CPUAccelerator when the user passes accelerator="gpu" with devices=0. This is not consistent with the behavior of the device-specific flags (e.g. gpus=0), which do fall back to the CPU.
Pitch
We need consistent behavior between the accelerator and devices API and the device-specific flags. A sketch of the expected outcome follows.
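As a concrete illustration of the consistency being asked for (and since implemented, per the closing comment above), all of the following calls would be rejected the same way rather than silently switching accelerators. The exact exception type varies by release, so it is caught generically here:

```python
from pytorch_lightning import Trainer

# Every accelerator should reject devices=0 identically instead of
# silently falling back to the CPU.
for accel in ("cpu", "gpu", "tpu"):
    try:
        Trainer(accelerator=accel, devices=0)
    except Exception as err:  # exact exception type depends on the release
        print(f"{accel}: {err}")
```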
cc @tchaton @justusschock @awaelchli @borda @kaushikb11 @rohitgr7 @akihironitta