Hi @awaelchli, I am interested in this. Is it up for the taking? Also, could you provide some additional details on what's expected in the PR?
@Atharva-Phatak Yes, sure. Once #11617 is merged, feel free to work on it. The PR should just integrate the same changes into this file: https://github.com/Lightning-AI/lightning/blob/master/src/lightning_lite/strategies/launchers/subprocess_script.py
@awaelchli Thanks for the update. I will work on it once #11617 has merged.
It is merged now :tada:
Thanks for letting me know. I will get started.
Hi, quick question: I was looking at #11617, and it looks like it added Hydra support for multirun. I believe most of the changes were made to the launcher (https://github.com/Lightning-AI/lightning/blob/45ca78167efaa98f5e78ca73d79d4e71946db253/src/pytorch_lightning/strategies/launchers/subprocess_script.py), and for this issue I have to apply the same changes to Lightning Lite, right?
@Atharva-Phatak Yes, my idea is that the new functions (e.g. `_hydra_subprocess_cmd`) can just be moved to Lite and then imported in PL to share the code. Then all that is left is to call the function in the Lite version of the launcher as well, as was done in the PR.
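To make that concrete, here is a minimal sketch of the shared helper as it might look after the move, with the body adapted from what #11617 added on the PL side. The signature and the `hydra.run.dir` / `hydra.job.name` overrides follow that PR, but treat the details as illustrative rather than the final implementation; the function only makes sense when called from inside a running Hydra application:

```python
# Sketch: lightning_lite/strategies/launchers/subprocess_script.py
import os
import sys
from typing import List, Tuple


def _hydra_subprocess_cmd(local_rank: int) -> Tuple[List[str], str]:
    """Build the command and working directory used to relaunch the
    current Hydra script in a DDP worker subprocess."""
    from hydra.utils import get_original_cwd, to_absolute_path

    # Relaunch the script via its absolute path, with the original CLI args.
    command = [sys.executable, to_absolute_path(sys.argv[0])] + sys.argv[1:]

    # Start the worker from the directory the user launched from, but point
    # its Hydra run dir at the current output dir and tag it with the rank.
    cwd = get_original_cwd()
    command += [f'hydra.run.dir="{os.getcwd()}"', f"hydra.job.name=train_ddp_process_{local_rank}"]
    return command, cwd
```

PL's launcher would then drop its local copy and simply do `from lightning_lite.strategies.launchers.subprocess_script import _hydra_subprocess_cmd`, so both packages share one implementation.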
Proposed refactor
Apply/share changes in #11617 with the Lite implementation.
Motivation
Since both are using the same launcher, the improvements should be integrated for both.
Pitch
Move the `_hydra_subprocess_cmd` function to `lightning_lite.strategies.launchers.subprocess_script`, and share the implementation in PL.
Additional context
#11617 was developed in parallel to the standalone Lite efforts, so we didn't have time to adjust.
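As a sketch of the remaining piece, this is roughly how the Lite launcher could pick the relaunch command, mirroring the PL call site from #11617. It reuses the `_hydra_subprocess_cmd` helper sketched above; `_spawn_worker` is a hypothetical wrapper name, the inlined `_basic_subprocess_cmd` body is simplified, and the `RequirementCache`-based availability guard is an assumption about how the optional Hydra import would be checked:

```python
# Sketch: choosing the relaunch command inside the Lite launcher.
import subprocess
import sys
from typing import List

from lightning_utilities.core.imports import RequirementCache

_HYDRA_AVAILABLE = RequirementCache("hydra-core")


def _basic_subprocess_cmd() -> List[str]:
    # Non-Hydra fallback: relaunch the script exactly as it was invoked.
    return [sys.executable, sys.argv[0]] + sys.argv[1:]


def _spawn_worker(local_rank: int) -> subprocess.Popen:
    command, cwd = _basic_subprocess_cmd(), None
    if _HYDRA_AVAILABLE:
        from hydra.core.hydra_config import HydraConfig

        # Only switch to the Hydra-aware command when the script is actually
        # running under a Hydra application (e.g. a multirun sweep).
        if HydraConfig.initialized():
            command, cwd = _hydra_subprocess_cmd(local_rank=local_rank)
    return subprocess.Popen(command, cwd=cwd)
```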
If you enjoy Lightning, check out our other projects! ⚡
- Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
- Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.