Lightning-AI / pytorch-lightning

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0

Improve compatibility messaging for users. #13823

Open tchaton opened 2 years ago

tchaton commented 2 years ago

🚀 Feature

Motivation

While reading through the PyTorch codebase, I came across this code example: https://github.com/pytorch/pytorch/blob/538647fe1fb94b7822ea3b8bbbd6901961431d60/torch/fx/_compatibility.py. I believe such logic would provide value here as well.
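
For reference, the pattern in that file is roughly a decorator that tags public APIs with a compatibility guarantee and records them in a registry. A rough, illustrative sketch (names and details approximate PyTorch's implementation rather than reproduce it):

_BACK_COMPAT_OBJECTS = {}  # APIs that promise backward compatibility

def compatibility(is_backward_compatible):
    def mark(fn):
        if is_backward_compatible:
            _BACK_COMPAT_OBJECTS[fn] = None
            # Surface the guarantee in the rendered docs.
            fn.__doc__ = (fn.__doc__ or "") + (
                "\n\n.. note::\n    Backwards-compatibility for this API is guaranteed.\n"
            )
        return fn
    return mark

@compatibility(is_backward_compatible=True)
def stable_api():
    """A function whose interface we promise not to break."""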

Furthermore, I believe we could create a mechanism to better inform users about deprecation.

Here is a proposed mechanism:

On these calls:

trainer = Trainer(...)
trainer.X(...)

Find all the non-built-in (user-defined) classes and activate a compatibility tracer mechanism, so we can capture all deprecated calls made from external classes into our own codebase.

Example:

# pytorch_lightning/strategies/ddp.py

class DDPStrategy:

    @deprecated("version_")
    def method_X(self, ...):
        ...

# users/..../custom_strategies/custom_ddp.py

class CustomDDPStrategy(DDPStrategy):

    def method_X(self, ...):
        super().method_X(...)

trainer = Trainer(strategy=CustomDDPStrategy(...))
Warning reports:

`CustomDDPStrategy.method_X` calls `DDPStrategy.method_X`, which is deprecated in 1.X and will be removed in 1.Y. Use `method_Z` instead in `CustomDDPStrategy`.
...
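
A minimal sketch of what such a `deprecated` decorator could look like (all names here, including the `deprecated_in` / `removed_in` / `use_instead` parameters, are hypothetical, not an existing Lightning API). It compares the class that defined the method with the runtime type of `self`, so the warning can name the external subclass the call came through:

import functools
import warnings

def deprecated(deprecated_in, removed_in, use_instead):
    """Hypothetical decorator: warn when a deprecated method is reached,
    naming the (possibly external) subclass the call came through."""
    def decorator(fn):
        owner = fn.__qualname__.split(".")[0]  # class that defines the method

        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            caller = type(self).__name__
            if caller != owner:
                # Reached through a subclass that overrides or forwards to it.
                warnings.warn(
                    f"`{caller}.{fn.__name__}` calls `{fn.__qualname__}`, which is "
                    f"deprecated in {deprecated_in} and will be removed in {removed_in}. "
                    f"Use `{use_instead}` instead in `{caller}`.",
                    DeprecationWarning,
                    stacklevel=2,
                )
            else:
                warnings.warn(
                    f"`{fn.__qualname__}` is deprecated in {deprecated_in} and will be "
                    f"removed in {removed_in}. Use `{use_instead}` instead.",
                    DeprecationWarning,
                    stacklevel=2,
                )
            return fn(self, *args, **kwargs)

        return wrapper
    return decorator

class DDPStrategy:
    @deprecated("1.X", "1.Y", "method_Z")
    def method_X(self):
        ...

class CustomDDPStrategy(DDPStrategy):
    def method_X(self):
        return super().method_X()

CustomDDPStrategy().method_X()  # warning names CustomDDPStrategy as the caller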

Pitch

Alternatives

Additional context


If you enjoy Lightning, check out our other projects! ⚡

cc @tchaton @justusschock @awaelchli @borda @rohitgr7 @akihironitta

tchaton commented 2 years ago

@carmocca for thoughts.

carmocca commented 2 years ago

I don't see the advantage over our current system of showing deprecation messages.

tchaton commented 2 years ago

@carmocca I believe the trace is helpful; users usually complain about the long lists of warnings without a clear explanation of the root cause.

carmocca commented 2 years ago

As discussed online, I would add an option to include the stack trace at the time of a deprecation.

This could be done externally with this addition:

import sys
import traceback
import warnings

def showwarning_with_deprecation_traceback(message, category, filename, lineno, file=None, line=None):
    # Same signature as `warnings.showwarning`, so it can be installed as a drop-in hook.
    log = file if hasattr(file, 'write') else sys.stderr
    if issubclass(category, DeprecationWarning):
        # Print the call stack that led to the warning.
        stack = traceback.extract_stack()[:-5]  # `rank_zero_deprecation` will add 5 extra levels
        traceback.print_list(stack, file=log)
    # Fall back to the standard one-line warning message.
    log.write(warnings.formatwarning(message, category, filename, lineno, line))

warnings.showwarning = showwarning_with_deprecation_traceback
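
If the hook shouldn't stay installed globally, it could also be scoped with a context manager; a small sketch (the `deprecation_tracebacks` name is illustrative):

import contextlib

@contextlib.contextmanager
def deprecation_tracebacks():
    # Temporarily install the hook, restoring the previous one on exit.
    previous = warnings.showwarning
    warnings.showwarning = showwarning_with_deprecation_traceback
    try:
        yield
    finally:
        warnings.showwarning = previous

# with deprecation_tracebacks():
#     trainer.fit(model, train_dataloaders=train_data)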

Example output:

  rank_zero_warn(
  File "/home/carmocca/git/lightning/examples/pl_bug_report/bug_report_model.py", line 82, in <module>
    run()
  File "/home/carmocca/git/lightning/examples/pl_bug_report/bug_report_model.py", line 61, in run
    trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data)
  File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 700, in fit
    self._call_and_handle_interrupt(
  File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 654, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 741, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 1166, in _run
    results = self._run_stage()
  File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 1252, in _run_stage
    return self._run_train()
  File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 1282, in _run_train
    self.fit_loop.run()
  File "/home/carmocca/git/lightning/src/pytorch_lightning/loops/loop.py", line 195, in run
    self.on_run_start(*args, **kwargs)
  File "/home/carmocca/git/lightning/src/pytorch_lightning/loops/fit_loop.py", line 210, in on_run_start
    self.trainer.reset_train_dataloader(self.trainer.lightning_module)
  File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 1832, in reset_train_dataloader
    apply_to_collection(loaders, DataLoader, self._data_connector._worker_check, "train_dataloader")
  File "/home/carmocca/git/lightning/src/pytorch_lightning/utilities/apply_func.py", line 100, in apply_to_collection
    return function(data, *args, **kwargs)
  File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/connectors/data_connector.py", line 226, in _worker_check
    rank_zero_deprecation("foobar")
/home/carmocca/git/lightning/src/pytorch_lightning/trainer/connectors/data_connector.py:226: LightningDeprecationWarning: foobar
  rank_zero_deprecation("foobar")
stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions - the Lightning Team!