Hey @awaelchli,
I know there is a strong push to reduce the connectors to a minimal number, and I don't like this effort. @williamFalcon originally introduced the connectors to make the Trainer approachable to new readers and contributors. The goal was to keep the highest layer of Lightning as clean as possible.
IMO, the Trainer code is becoming more complex than it used to be: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/trainer.py#L447, and I have seen tweets about Lightning becoming unreadable.
I would prefer that we come up with a better approach to organising the code instead of dumping everything into the Trainer class and making it unreadable.
Proposed refactoring or deprecation
Reduce the number of connectors the Trainer relies on to only three core ones:
Motivation
As part of the Lightning API audit led by @ananthsub + co., we have already proposed several simplifications and code-quality improvements to the connectors in #7493, #7654, #9778, #10161, #10119, #10108, #10110, etc. There are still a few connectors that are problematic for several reasons.
These three properties make most connectors a burden to maintain, as they merely obscure the fact that the Trainer remains an overly powerful class.
Pitch
Remove (refactor away) all connectors except the core ones:
We (@awaelchli @daniellepintz + co) believe that these connectors carry enough complexity and encapsulate enough responsibility to warrant their existence as standalone classes. Hence, we formulate these goals:
Additional context
There are many similarities between the "DataLoadingMixin" and the DataConnector. Since the "DataLoadingMixin" is not a true mixin and we aim to remove the "mixins" from the Trainer entirely, the DataConnector is a natural place for this logic to go.
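To make the intended direction concrete, here is a minimal, hypothetical sketch (class and method names below are illustrative, not the actual Lightning internals): dataloader-handling logic that previously lived on a Trainer mixin base class moves onto the DataConnector instance that the Trainer already composes, so the Trainer itself only delegates.

```python
# Hypothetical sketch only: names are simplified and illustrative, not the real
# Lightning internals. The idea is composition (a connector owned by the Trainer)
# instead of inheritance (a "mixin" the Trainer derives from).
from torch.utils.data import DataLoader


class DataConnector:
    """Owns the dataloader-related logic that previously sat on the Trainer "mixin"."""

    def __init__(self, trainer: "Trainer") -> None:
        self.trainer = trainer

    def prepare_dataloader(self, dataloader: DataLoader) -> DataLoader:
        # placeholder for the logic currently in the "DataLoadingMixin",
        # e.g. re-wrapping the dataloader with a distributed sampler when needed
        return dataloader


class Trainer:
    # no longer inherits from a "DataLoadingMixin"; it composes a connector instead
    def __init__(self) -> None:
        self._data_connector = DataConnector(self)

    def reset_train_dataloader(self, model) -> DataLoader:
        # the Trainer only delegates; the connector encapsulates the responsibility
        return self._data_connector.prepare_dataloader(model.train_dataloader())
```

This is the design the pitch argues for: each remaining connector is a standalone class with one cohesive responsibility, while the Trainer stays a thin orchestrator.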
cc @justusschock @awaelchli @akihironitta @rohitgr7 @kaushikb11 @ninginthecloud