ray-project / ray_lightning

PyTorch Lightning Distributed Accelerators using Ray
Apache License 2.0
211 stars · 34 forks

Bump pytorch-lightning from 1.4.7 to 1.5.7 #112

Closed · dependabot[bot] closed this 2 years ago

dependabot[bot] commented 2 years ago

Bumps pytorch-lightning from 1.4.7 to 1.5.7.
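The bump itself amounts to a one-line pin change. The fragment below is illustrative only; Dependabot edits whichever file actually declares the dependency (e.g. a requirements file or setup.py):

```diff
-pytorch-lightning==1.4.7
+pytorch-lightning==1.5.7
```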

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.5.7] - 2021-12-21

Fixed

  • Fixed NeptuneLogger when using DDP (#11030)
  • Fixed a bug to disable logging hyperparameters in logger if there are no hparams (#11105)
  • Avoid the deprecated onnx.export(example_outputs=...) in torch 1.10 (#11116)
  • Fixed an issue when torch-scripting a LightningModule after training with Trainer(sync_batchnorm=True) (#11078)
  • Fixed an AttributeError occurring when using a CombinedLoader (multiple dataloaders) for prediction (#11111)
  • Fixed bug where Trainer(track_grad_norm=..., logger=False) would fail (#11114)
  • Fixed an incorrect warning being produced by the model summary when using bf16 precision on CPU (#11161)

Changed

  • DeepSpeed does not require lightning module zero 3 partitioning (#10655)
  • The ModelCheckpoint callback now saves and restores attributes best_k_models, kth_best_model_path, kth_value, and last_model_path (#10995)
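The ModelCheckpoint change above (#10995) can be pictured as a plain state_dict/load_state_dict round trip over the callback's bookkeeping attributes. This is a hedged stdlib sketch of that pattern, not Lightning's actual implementation:

```python
# Illustrative sketch (not Lightning's code): the bookkeeping attributes
# best_k_models, kth_best_model_path, kth_value and last_model_path are
# included in the saved callback state and restored on load.

class CheckpointStateSketch:
    def __init__(self):
        self.best_k_models = {}        # checkpoint path -> monitored score
        self.kth_best_model_path = ""
        self.kth_value = None
        self.last_model_path = ""

    def state_dict(self):
        # Everything returned here survives a save/restore round trip.
        return {
            "best_k_models": dict(self.best_k_models),
            "kth_best_model_path": self.kth_best_model_path,
            "kth_value": self.kth_value,
            "last_model_path": self.last_model_path,
        }

    def load_state_dict(self, state):
        self.best_k_models = dict(state["best_k_models"])
        self.kth_best_model_path = state["kth_best_model_path"]
        self.kth_value = state["kth_value"]
        self.last_model_path = state["last_model_path"]


# Round trip: a fresh instance recovers the original bookkeeping.
saved = CheckpointStateSketch()
saved.best_k_models = {"epoch=3.ckpt": 0.91}
saved.last_model_path = "last.ckpt"
restored = CheckpointStateSketch()
restored.load_state_dict(saved.state_dict())
```

Before this release, attributes left out of the saved state were simply reset to their defaults on resume, which is the failure mode the change addresses.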

Contributors

@awaelchli @borchero @carmocca @guyang3532 @kaushikb11 @ORippler @Raalsky @rohitgr7 @SeanNaren

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.5.6] - 2021-12-15

Fixed

  • Fixed a bug where the DeepSpeedPlugin arguments cpu_checkpointing and contiguous_memory_optimization were not being forwarded to deepspeed correctly (#10874)
  • Fixed an issue with NeptuneLogger causing checkpoints to be uploaded with a duplicated file extension (#11015)
  • Fixed support for logging within callbacks returned from LightningModule (#10991)
  • Fixed running sanity check with RichProgressBar (#10913)
  • Fixed support for CombinedLoader while checking for warning raised with eval dataloaders (#10994)
  • The TQDM progress bar now correctly shows the on_epoch logged values on train epoch end (#11069)
  • Fixed bug where the TQDM updated the training progress bar during trainer.validate (#11069)
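The first fix in the list above (#10874) is an instance of a common bug class: constructor flags that are accepted but never forwarded to the backend. This is a hypothetical stdlib sketch, not Lightning's code; the key names follow DeepSpeed's activation-checkpointing config section:

```python
# Hypothetical sketch of the bug class fixed in #10874 (not Lightning's
# actual plugin): constructor flags must be written into the config the
# backend reads, otherwise they are silently ignored.

class DeepSpeedPluginSketch:
    def __init__(self, cpu_checkpointing=False,
                 contiguous_memory_optimization=False):
        # The fix: forward both flags into the backend config instead of
        # dropping them after __init__.
        self.config = {
            "activation_checkpointing": {
                "cpu_checkpointing": cpu_checkpointing,
                "contiguous_memory_optimization": contiguous_memory_optimization,
            }
        }


plugin = DeepSpeedPluginSketch(cpu_checkpointing=True)
```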

Contributors

@carmocca @jona-0 @kaushikb11 @Raalsky @rohitgr7

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.5.5] - 2021-12-07

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

[1.5.7] - 2021-12-21

Fixed

  • Fixed NeptuneLogger when using DDP (#11030)
  • Fixed a bug to disable logging hyperparameters in logger if there are no hparams (#11105)
  • Avoid the deprecated onnx.export(example_outputs=...) in torch 1.10 (#11116)
  • Fixed an issue when torch-scripting a LightningModule after training with Trainer(sync_batchnorm=True) (#11078)
  • Fixed an AttributeError occurring when using a CombinedLoader (multiple dataloaders) for prediction (#11111)
  • Fixed bug where Trainer(track_grad_norm=..., logger=False) would fail (#11114)
  • Fixed an incorrect warning being produced by the model summary when using bf16 precision on CPU (#11161)

Changed

  • DeepSpeed does not require lightning module zero 3 partitioning (#10655)
  • The ModelCheckpoint callback now saves and restores attributes best_k_models, kth_best_model_path, kth_value, and last_model_path (#10995)

[1.5.6] - 2021-12-15

Fixed

  • Fixed a bug where the DeepSpeedPlugin arguments cpu_checkpointing and contiguous_memory_optimization were not being forwarded to deepspeed correctly (#10874)
  • Fixed an issue with NeptuneLogger causing checkpoints to be uploaded with a duplicated file extension (#11015)
  • Fixed support for logging within callbacks returned from LightningModule (#10991)
  • Fixed running sanity check with RichProgressBar (#10913)
  • Fixed support for CombinedLoader while checking for warning raised with eval dataloaders (#10994)
  • The TQDM progress bar now correctly shows the on_epoch logged values on train epoch end (#11069)
  • Fixed bug where the TQDM updated the training progress bar during trainer.validate (#11069)

[1.5.5] - 2021-12-07

Fixed

  • Disabled batch_size extraction for torchmetric instances because they accumulate the metrics internally (#10815)
  • Fixed an issue with SignalConnector not restoring the default signal handlers on teardown when running on SLURM or with fault-tolerant training enabled (#10611)
  • Fixed SignalConnector._has_already_handler check for callable type (#10483)
  • Fixed an issue to return the results for each dataloader separately instead of duplicating them for each (#10810)
  • Improved exception message if rich version is less than 10.2.2 (#10839)
  • Fixed uploading best model checkpoint in NeptuneLogger (#10369)
  • Fixed early schedule reset logic in PyTorch profiler that was causing data leak (#10837)
  • Fixed a bug that caused incorrect batch indices to be passed to the BasePredictionWriter hooks when using a dataloader with num_workers > 0 (#10870)
  • Fixed an issue with item assignment on the logger on rank > 0 for those who support it (#10917)
  • Fixed importing torch_xla.debug for torch-xla<1.8 (#10836)
  • Fixed an issue with DDPSpawnPlugin and related plugins leaving a temporary checkpoint behind (#10934)
  • Fixed a TypeError occurring in the SignalConnector.teardown() method (#10961)
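The SignalConnector fixes above (#10611, #10961) both concern handler lifecycle: handlers installed during training must be put back on teardown. This is a hedged stdlib sketch of that restore pattern, not Lightning's implementation:

```python
import signal

# Hedged sketch (not Lightning's code) of the behaviour fixed in #10611:
# a connector that installs custom signal handlers must restore the
# previous handlers on teardown, or they leak past the Trainer's lifetime.

class SignalConnectorSketch:
    def __init__(self):
        self._original_handlers = {}

    def register(self, signum, handler):
        # Remember whatever was installed before, so teardown can restore it.
        self._original_handlers[signum] = signal.getsignal(signum)
        signal.signal(signum, handler)

    def teardown(self):
        for signum, previous in self._original_handlers.items():
            signal.signal(signum, previous)
        self._original_handlers.clear()


# Install a no-op SIGINT handler, then restore the previous one on teardown.
connector = SignalConnectorSketch()
before = signal.getsignal(signal.SIGINT)
connector.register(signal.SIGINT, lambda signum, frame: None)
connector.teardown()
```

Note that `signal.signal` may only be called from the main thread, which is one reason this kind of setup/teardown bookkeeping is easy to get wrong under SLURM or fault-tolerant restarts.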

[1.5.4] - 2021-11-30

... (truncated)


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
dependabot[bot] commented 2 years ago

Superseded by #116.