tchaton opened this issue 2 years ago
@carmocca for thoughts.
I don't see the advantage over our current system of showing deprecation messages.
@carmocca I believe the trace is helpful; users often complain about the long list of warnings without a clear explanation of the root cause.
As discussed online, I would add an option to include the stack trace at the time of a deprecation.
This could be done externally with this addition:
import sys
import traceback
import warnings

def showwarning_with_deprecation_traceback(message, category, filename, lineno, file=None, line=None):
    log = file if hasattr(file, "write") else sys.stderr
    if issubclass(category, DeprecationWarning):
        # `rank_zero_deprecation` adds 5 extra frames, so drop them from the stack
        stack = traceback.extract_stack()[:-5]
        traceback.print_list(stack, file=log)
    log.write(warnings.formatwarning(message, category, filename, lineno, line))

warnings.showwarning = showwarning_with_deprecation_traceback
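As a design note, the `[:-5]` trim is coupled to `rank_zero_deprecation`'s internal call depth. A small variation (a sketch; `make_showwarning` and `stack_trim` are names introduced here, not existing APIs) makes the trim configurable so the hook isn't tied to one call chain:

import sys
import traceback
import warnings

def make_showwarning(stack_trim: int = 5):
    """Build a `showwarning` hook that prints the call stack for deprecation
    warnings, dropping the last `stack_trim` frames (the warning machinery)."""
    def hook(message, category, filename, lineno, file=None, line=None):
        log = file if hasattr(file, "write") else sys.stderr
        if issubclass(category, DeprecationWarning):
            traceback.print_list(traceback.extract_stack()[:-stack_trim], file=log)
        log.write(warnings.formatwarning(message, category, filename, lineno, line))
    return hook

# e.g. 5 frames for Lightning's `rank_zero_deprecation`, 1 for a direct `warnings.warn`
warnings.showwarning = make_showwarning(stack_trim=5)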
Example output:
File "/home/carmocca/git/lightning/examples/pl_bug_report/bug_report_model.py", line 82, in <module>
run()
File "/home/carmocca/git/lightning/examples/pl_bug_report/bug_report_model.py", line 61, in run
trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data)
File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 700, in fit
self._call_and_handle_interrupt(
File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 654, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 741, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 1166, in _run
results = self._run_stage()
File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 1252, in _run_stage
return self._run_train()
File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 1282, in _run_train
self.fit_loop.run()
File "/home/carmocca/git/lightning/src/pytorch_lightning/loops/loop.py", line 195, in run
self.on_run_start(*args, **kwargs)
File "/home/carmocca/git/lightning/src/pytorch_lightning/loops/fit_loop.py", line 210, in on_run_start
self.trainer.reset_train_dataloader(self.trainer.lightning_module)
File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/trainer.py", line 1832, in reset_train_dataloader
apply_to_collection(loaders, DataLoader, self._data_connector._worker_check, "train_dataloader")
File "/home/carmocca/git/lightning/src/pytorch_lightning/utilities/apply_func.py", line 100, in apply_to_collection
return function(data, *args, **kwargs)
File "/home/carmocca/git/lightning/src/pytorch_lightning/trainer/connectors/data_connector.py", line 226, in _worker_check
rank_zero_deprecation("foobar")
/home/carmocca/git/lightning/src/pytorch_lightning/trainer/connectors/data_connector.py:226: LightningDeprecationWarning: foobar
rank_zero_deprecation("foobar")
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions - the Lightning Team!
🚀 Feature
Motivation
After reading through the PyTorch codebase, I came across this code example: https://github.com/pytorch/pytorch/blob/538647fe1fb94b7822ea3b8bbbd6901961431d60/torch/fx/_compatibility.py. I believe such logic would provide value.
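For context, the linked file implements (roughly) a `compatibility(is_backward_compatible)` decorator that records each marked API in a registry and appends a stability note to its docstring. A condensed sketch of the pattern (simplified, not a verbatim copy of the linked commit):

import textwrap

_BACK_COMPAT_OBJECTS = {}          # APIs guaranteed to stay backward-compatible
_MARKED_WITH_COMPATIBILITY = {}    # every API carrying an explicit marker

def compatibility(is_backward_compatible: bool):
    """Tag an API's stability level in its docstring and record it in a registry."""
    def mark(fn):
        note = (
            "\n.. note::\n    Backwards-compatibility for this API is guaranteed.\n"
            if is_backward_compatible
            else "\n.. warning::\n    This API is experimental and is *NOT* backward-compatible.\n"
        )
        fn.__doc__ = textwrap.dedent(fn.__doc__ or "") + note
        if is_backward_compatible:
            _BACK_COMPAT_OBJECTS.setdefault(fn)
        _MARKED_WITH_COMPATIBILITY.setdefault(fn)
        return fn
    return mark

@compatibility(is_backward_compatible=True)
def stable_api():
    """A public API we promise not to break."""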
Furthermore, I believe we could create a mechanism to better inform users about deprecation.
Here is a proposed mechanism:
On those calls, find all the non-built-in classes and activate a compatibility tracer mechanism so we can capture all deprecated calls from external classes into our own codebase.
Example:
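As an illustration of the idea (the `deprecated` decorator, the `_DEPRECATED_CALLS` registry, and the package-path check below are all assumptions, not existing Lightning APIs):

import functools
import inspect
import warnings

_DEPRECATED_CALLS = []  # hypothetical registry: (api, caller_file, caller_line)

def deprecated(message: str):
    """Hypothetical marker: warn, and record the caller, when a deprecated API is hit."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            caller = inspect.stack()[1]
            # Assumption: a call is "external" if the caller's file lives outside our package
            if "pytorch_lightning" not in caller.filename:
                _DEPRECATED_CALLS.append((fn.__qualname__, caller.filename, caller.lineno))
                warnings.warn(message, DeprecationWarning, stacklevel=2)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("`old_api` is deprecated; use `new_api` instead")
def old_api():
    ...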
Pitch
Alternatives
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @tchaton @justusschock @awaelchli @borda @rohitgr7 @akihironitta