Lightning-AI / pytorch-lightning

Use the non-protected LRScheduler import #15587

Closed: carmocca closed this issue 1 year ago

carmocca commented 2 years ago

🚀 Feature

Motivation

Avoid protected imports

Pitch

The change tracked in https://github.com/pytorch/pytorch/issues/61232 has been merged into PyTorch, renaming _LRScheduler to LRScheduler. The old class is kept for backwards compatibility, but we should still use the new one.

The task is to add logic like this

import torch

# Prefer the public class on torch >= 1.14 and fall back to the protected
# _LRScheduler on older versions; _TORCH_GREATER_EQUAL_1_14 is Lightning's
# existing torch-version flag.
LRScheduler = (
    torch.optim.lr_scheduler.LRScheduler
    if _TORCH_GREATER_EQUAL_1_14
    else torch.optim.lr_scheduler._LRScheduler
)

to https://github.com/Lightning-AI/lightning/blob/d5003b1c07fda783f651a732c86ad48656be42c1/src/lightning_lite/utilities/types.py#L66 and to update the places that use it.
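
As a rough sketch of how a call site would then look (the module path follows the file above, but the helper function here is an assumption for illustration, not code from the repository):

from lightning_lite.utilities.types import LRScheduler  # compatibility alias

def check_scheduler(scheduler) -> None:
    # Works on both old and new torch versions, because the alias resolves
    # to the appropriate scheduler base class at import time.
    if not isinstance(scheduler, LRScheduler):
        raise TypeError(f"Expected an LRScheduler, got {type(scheduler).__name__}")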

Alternatives

Keep using the protected import


If you enjoy Lightning, check out our other projects! ⚡

cc @borda

qmaruf commented 2 years ago

I am interested in working on this feature. @carmocca, what steps do I need to follow?

carmocca commented 2 years ago

I described the work in the top post. We want to use LRScheduler throughout the codebase by importing a small compatibility variable defined in lightning/src/lightning_lite/utilities/types.py.

qmaruf commented 2 years ago

Hi @shenoynikhil, I am working on this one.

qmaruf commented 2 years ago

All the changes are here: https://github.com/Lightning-AI/lightning/compare/master...qmaruf:lightning:feature-15587

However, some tests are failing with RuntimeError: torch.distributed is not available. Cannot initialize distributed process group. I am using a Mac M1 Pro.

cc @carmocca
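
For context on the failure above: torch.distributed is an optional component of a torch build and is often missing from macOS wheels, which is what the RuntimeError indicates. A minimal check like the following (standard torch API, not Lightning-specific) shows whether it is compiled in; test suites typically skip distributed tests when it is not:

import torch

# torch.distributed may not be compiled into a given build (common on macOS);
# distributed tests are usually skipped in that case.
if torch.distributed.is_available():
    print("torch.distributed is available")
else:
    print("torch.distributed is not available in this build")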

carmocca commented 2 years ago

Thanks! You can go ahead and open a PR.

ahmadmustafaanis commented 2 years ago

@carmocca can you also point me to a good first issue that I can work on? Most of the issues I see already have people working on them.