cellarium-ai / cellarium-ml

Distributed single-cell data analysis.

Add a new `ParallelStrategy` that doesn't require gradient synchronization #175

Open ordabayevy opened 4 months ago

ordabayevy commented 4 months ago

At the moment we use `DDPStrategy` to train models that don't require gradient synchronization (`IncrementalPCA`, `OnePassMeanVarStd`). However, the DDP strategy requires that the model have parameters, which is why these models contain a `_dummy_param`. This works, but it gets inconvenient when the trained model is later used as a transform: the dummy parameter is then unused, and DDP complains about unused parameters (this has been fixed in #186).
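
A minimal sketch of what such a strategy could look like, assuming Lightning 2.x internals: `DDPStrategy._setup_model` is the hook that wraps the module in `DistributedDataParallel`, so overriding it to return the module unwrapped keeps the process-group setup (and collectives like `all_gather`) while skipping gradient hooks. The class name `NoGradSyncStrategy` is hypothetical, and relying on a protected method is a sketch, not a definitive implementation:

```python
import torch.nn as nn
from lightning.pytorch.strategies import DDPStrategy


class NoGradSyncStrategy(DDPStrategy):
    """Hypothetical strategy: keep DDP's process-group setup and collective
    communication, but skip wrapping the module in DistributedDataParallel,
    so the model needs neither parameters nor gradient synchronization."""

    def _setup_model(self, model: nn.Module) -> nn.Module:
        # DDPStrategy._setup_model normally wraps the module in
        # torch.nn.parallel.DistributedDataParallel; returning it unwrapped
        # disables gradient synchronization hooks while the process group
        # remains available for collective operations.
        return model
```

Usage would then be the same as any other strategy, e.g. `Trainer(strategy=NoGradSyncStrategy(), accelerator="gpu", devices=4)`, with no `_dummy_param` needed.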

ordabayevy commented 2 months ago

Starting from PyTorch Lightning 2.3, returning `None` from a training step is no longer allowed: https://github.com/Lightning-AI/pytorch-lightning/pull/19918
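
One possible workaround, sketched below under the assumption that manual optimization still permits a training step with no return value: switch the statistics-only module to manual optimization so the automatic optimization loop (which now rejects a `None` return) is bypassed. The class name `StatsOnlyModule` and the `_update_statistics` helper are hypothetical stand-ins for the actual one-pass updates in models like `OnePassMeanVarStd`:

```python
import lightning.pytorch as pl


class StatsOnlyModule(pl.LightningModule):
    """Hypothetical module that accumulates sufficient statistics and has
    no trainable parameters, hence no loss to backpropagate."""

    def __init__(self):
        super().__init__()
        # Manual optimization: Lightning will not call backward() or an
        # optimizer step, so training_step need not return a loss.
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        self._update_statistics(batch)  # hypothetical one-pass update

    def configure_optimizers(self):
        # No optimizer is needed; Lightning warns but proceeds.
        return None
```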