ray-project / ray_lightning

Pytorch Lightning Distributed Accelerators using Ray
Apache License 2.0

Bump pytorch-lightning from 1.5.9 to 1.6.5 #183

Closed · dependabot[bot] closed this 2 years ago

dependabot[bot] commented 2 years ago

Bumps pytorch-lightning from 1.5.9 to 1.6.5.
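
As a quick post-upgrade sanity check (not part of this PR itself), the installed version can be verified from Python; the expected string below assumes the bump in this PR has been applied:

```python
# Sanity check after the bump: confirm the installed pytorch-lightning
# version matches the new pin from this PR.
import pytorch_lightning as pl

assert pl.__version__ == "1.6.5", f"unexpected version: {pl.__version__}"
```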

Release notes

Sourced from pytorch-lightning's releases.

PyTorch Lightning 1.6.5: Standard patch release

[1.6.5] - 2022-07-13

Fixed

  • Fixed estimated_stepping_batches requiring distributed comms in configure_optimizers for the DeepSpeedStrategy (#13350)
  • Fixed bug with Python version check that prevented use with development versions of Python (#13420)
  • The loops now call .set_epoch() also on batch samplers if the dataloader has one wrapped in a distributed sampler (#13396); see the sketch after this list
  • Fixed the restoration of log step during restart (#13467)
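
To ground the `.set_epoch()` entry above, here is a minimal plain-PyTorch sketch (not Lightning internals) of the pattern that the loops now apply automatically to wrapped batch samplers:

```python
# Minimal sketch of the set_epoch() pattern: in plain PyTorch the epoch
# must be passed to the DistributedSampler so shuffling changes between
# epochs. Lightning 1.6.5 now issues the equivalent call on batch
# samplers wrapped in a distributed sampler as well.
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.arange(100, dtype=torch.float32).unsqueeze(1))
# num_replicas/rank are given explicitly so the sketch runs without a process group
sampler = DistributedSampler(dataset, num_replicas=2, rank=0, shuffle=True)
loader = DataLoader(dataset, batch_size=8, sampler=sampler)

for epoch in range(3):
    sampler.set_epoch(epoch)  # without this, every epoch sees the same order
    for (batch,) in loader:
        pass  # training step would go here
```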

Contributors

@adamjstewart @akihironitta @awaelchli @Borda @martinosorb @rohitgr7 @SeanNaren

PyTorch Lightning 1.6.4: Standard patch release

[1.6.4] - 2022-06-01

Added

  • Added all DDP params to be exposed through hpu parallel strategy (#13067)

Changed

  • Keep torch.backends.cudnn.benchmark=False by default (unlike in v1.6.{0-4}) after speed and memory problems depending on the data used. Please consider tuning Trainer(benchmark) manually (see the sketch after this list). (#13154)
  • Prevent modification of torch.backends.cudnn.benchmark when Trainer(benchmark=...) is not set (#13154)
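
A minimal sketch of what the cudnn.benchmark change above means in practice, assuming a standard Trainer: enabling benchmarking is now an explicit opt-in.

```python
# With this change, Lightning no longer enables cudnn benchmarking by default.
# Opt in explicitly if your input shapes are static; otherwise leave the flag
# alone and Lightning will not touch torch.backends.cudnn.benchmark.
from pytorch_lightning import Trainer

trainer_opt_in = Trainer(max_epochs=1, benchmark=True)  # explicit opt-in
trainer_default = Trainer(max_epochs=1)                 # benchmark left unset
```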

Fixed

  • Fixed an issue causing zero-division error for empty dataloaders (#12885)
  • Fixed mismatching default values for the types of some arguments in the DeepSpeed and Fully-Sharded strategies which made the CLI unable to use them (#12989)
  • Avoid redundant callback restore warning while tuning (#13026)
  • Fixed Trainer(precision=64) during evaluation which now uses the wrapped precision module (#12983)
  • Fixed an issue to use wrapped LightningModule for evaluation during trainer.fit for BaguaStrategy (#12983)
  • Fixed an issue wrt unnecessary usage of habana mixed precision package for fp32 types (#13028)
  • Fixed the number of references of LightningModule so it can be deleted (#12897)
  • Fixed materialize_module setting a module's child recursively (#12870)
  • Fixed issue where the CLI could not pass a Profiler to the Trainer (#13084)
  • Fixed torchelastic detection with non-distributed installations (#13142)
  • Fixed logging's step values when multiple dataloaders are used during evaluation (#12184)
  • Fixed epoch logging on train epoch end (#13025)
  • Fixed DDPStrategy and DDPSpawnStrategy to initialize optimizers only after moving the module to the device (#11952); see the sketch below
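
The last entry above follows the standard PyTorch recommendation to build optimizers after the module has been placed on its device; a minimal plain-PyTorch sketch of that ordering (the model and optimizer here are illustrative):

```python
# Sketch of the ordering the DDP fix restores: move the module to its
# device first, then construct the optimizer, as recommended by the
# PyTorch optimizer documentation.
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(10, 1).to(device)                       # 1) move to device
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # 2) then build the optimizer
```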

Contributors

@akihironitta @ananthsub @ar90n @awaelchli @Borda @carmocca @dependabot @jerome-habana @mads-oestergaard @otaj @rohitgr7

PyTorch Lightning 1.6.3: Standard patch release

[1.6.3] - 2022-05-03

Fixed

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

[1.6.5] - 2022-07-12

Fixed

  • Fixed estimated_stepping_batches requiring distributed comms in configure_optimizers for the DeepSpeedStrategy (#13350)
  • Fixed bug with Python version check that prevented use with development versions of Python (#13420)
  • The loops now call .set_epoch() also on batch samplers if the dataloader has one wrapped in a distributed sampler (#13396)
  • Fixed the restoration of log step during restart (#13467)

[1.6.4] - 2022-06-01

Added

  • Added all DDP params to be exposed through hpu parallel strategy (#13067)

Changed

  • Keep torch.backends.cudnn.benchmark=False by default (unlike in v1.6.{0-4}) after speed and memory problems depending on the data used. Please consider tuning Trainer(benchmark) manually. (#13154)
  • Prevent modification of torch.backends.cudnn.benchmark when Trainer(benchmark=...) is not set (#13154)

Fixed

  • Fixed an issue causing zero-division error for empty dataloaders (#12885)
  • Fixed mismatching default values for the types of some arguments in the DeepSpeed and Fully-Sharded strategies which made the CLI unable to use them (#12989)
  • Avoid redundant callback restore warning while tuning (#13026)
  • Fixed Trainer(precision=64) during evaluation which now uses the wrapped precision module (#12983)
  • Fixed an issue to use wrapped LightningModule for evaluation during trainer.fit for BaguaStrategy (#12983)
  • Fixed an issue wrt unnecessary usage of habana mixed precision package for fp32 types (#13028)
  • Fixed the number of references of LightningModule so it can be deleted (#12897)
  • Fixed materialize_module setting a module's child recursively (#12870)
  • Fixed issue where the CLI could not pass a Profiler to the Trainer (#13084)
  • Fixed torchelastic detection with non-distributed installations (#13142)
  • Fixed logging's step values when multiple dataloaders are used during evaluation (#12184)
  • Fixed epoch logging on train epoch end (#13025)
  • Fixed DDPStrategy and DDPSpawnStrategy to initialize optimizers only after moving the module to the device (#11952)

[1.6.3] - 2022-05-03

Fixed

  • Use only a single instance of rich.console.Console throughout codebase (#12886); see the sketch after this list
  • Fixed an issue to ensure all the checkpoint states are saved in a common filepath with DeepspeedStrategy (#12887)
  • Fixed trainer.logger deprecation message (#12671)
  • Fixed an issue where sharded grad scaler is passed in when using BF16 with the ShardedStrategy (#12915)
  • Fixed an issue wrt recursive invocation of DDP configuration in hpu parallel plugin (#12912)
  • Fixed printing of ragged dictionaries in Trainer.validate and Trainer.test (#12857)
  • Fixed threading support for legacy loading of checkpoints (#12814)
  • Fixed pickling of KFoldLoop (#12441)
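
The first entry in the 1.6.3 list above refers to a shared-console pattern; a generic sketch of that idea follows (the helper name get_console is illustrative, not Lightning's API):

```python
# Generic sketch of the "single Console instance" idea: one module-level
# rich console shared across call sites, instead of constructing a new
# Console everywhere output is produced.
from rich.console import Console

_CONSOLE = Console()

def get_console() -> Console:
    """Return the shared Console instance."""
    return _CONSOLE
```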

... (truncated)

Commits
  • ff53616 Weekly patch release v1.6.5 (#13481)
  • 74b1317 Update __version__ field (#13200)
  • a5f82f5 Fix initialization of optimizers in DDP Strategy (#11952)
  • f89b181 Fix epoch logging on train epoch end (#13025)
  • 902774a Specify Trainer(benchmark=False) in parity benchmarks (#13182)
  • bd50b26 Fix logging's step values when multiple dataloaders are used during evaluatio...
  • 29b9963 Fix not running test codes (#13089)
  • 2acff1c Avoid changing the current cudnn.benchmark value (#13154)
  • 3c06cd8 Revert "Update deepspeed requirement from <0.6.0 to <0.7.0 in /requirements (...
  • 7e52126 Fix standalone test collection (#13177)
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
dependabot[bot] commented 2 years ago

Superseded by #193.