[1.6.5] - 2022-07-13
Fixed
- Fixed `estimated_stepping_batches` requiring distributed comms in `configure_optimizers` for the `DeepSpeedStrategy` (#13350); see the sketch after this list
- Fixed bug with Python version check that prevented use with development versions of Python (#13420)
- The loops now call `.set_epoch()` also on batch samplers if the dataloader has one wrapped in a distributed sampler (#13396)
- Fixed the restoration of log step during restart (#13467)
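For context on the `estimated_stepping_batches` entry above, here is a minimal sketch, not taken from the release itself, of the call pattern that #13350 makes safe under the `DeepSpeedStrategy`: reading `self.trainer.estimated_stepping_batches` inside `configure_optimizers` to size a step-based scheduler. The model and optimizer choices are illustrative.

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        # Before #13350, reading this property could require distributed
        # communication, which failed when configure_optimizers ran under
        # the DeepSpeedStrategy.
        total_steps = self.trainer.estimated_stepping_batches
        scheduler = torch.optim.lr_scheduler.OneCycleLR(
            optimizer, max_lr=0.1, total_steps=total_steps
        )
        return [optimizer], [{"scheduler": scheduler, "interval": "step"}]
```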
[1.6.4] - 2022-06-01
Added
- Added all DDP params to be exposed through the HPU parallel strategy (#13067)
Changed
- Keep `torch.backends.cudnn.benchmark=False` by default (unlike in v1.6.{0-4}) following speed and memory problems that depend on the data used; please consider tuning `Trainer(benchmark)` manually (#13154); see the sketch after this list
- Prevent modification of `torch.backends.cudnn.benchmark` when `Trainer(benchmark=...)` is not set (#13154)
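A hedged illustration of the two entries above, assuming PyTorch Lightning >= 1.6.5; the flag values are examples, not recommendations. The point is that `torch.backends.cudnn.benchmark` is only overridden when `benchmark` is passed explicitly.

```python
import torch
from pytorch_lightning import Trainer

torch.backends.cudnn.benchmark = True  # a value the user set themselves

# benchmark not passed: the user's cudnn setting is left untouched (#13154)
trainer = Trainer()

# benchmark passed explicitly: Lightning sets cudnn.benchmark accordingly.
# True can help when input shapes are fixed; False avoids the reported
# speed and memory problems that depend on the data used.
trainer = Trainer(benchmark=True)
```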
Fixed
- Fixed an issue causing a zero-division error for empty dataloaders (#12885)
- Fixed mismatching default values for the types of some arguments in the DeepSpeed and Fully Sharded strategies, which made the CLI unable to use them (#12989)
- Avoid redundant callback restore warning while tuning (#13026)
- Fixed `Trainer(precision=64)` during evaluation, which now uses the wrapped precision module (#12983)
- Fixed `BaguaStrategy` so that evaluation during `trainer.fit` uses the wrapped `LightningModule` (#12983)
- Fixed unnecessary usage of the Habana mixed-precision package for fp32 types (#13028)
- Fixed the number of references to the `LightningModule` so it can be deleted (#12897)
- Fixed `materialize_module` setting a module's child recursively (#12870)
- Fixed an issue where the CLI could not pass a `Profiler` to the `Trainer` (#13084)
- Fixed torchelastic detection with non-distributed installations (#13142)
- Fixed logging's step values when multiple dataloaders are used during evaluation (#12184); see the sketch after this list
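To make the last entry concrete, here is a minimal sketch, with toy data and an illustrative model rather than anything from the release, of the multi-dataloader evaluation setup whose logged step values #12184 fixes.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitEval(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def val_dataloader(self):
        # two validation dataloaders built from toy tensors
        ds_a = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
        ds_b = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
        return [DataLoader(ds_a, batch_size=16), DataLoader(ds_b, batch_size=16)]

    def validation_step(self, batch, batch_idx, dataloader_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        # Before #12184, the step values attached to these logs could be
        # wrong when evaluation iterated over multiple dataloaders.
        self.log("val_loss", loss)
```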
Bumps pytorch-lightning from 1.5.9 to 1.6.5.
Release notes
Sourced from pytorch-lightning's releases.
... (truncated)
Changelog
Sourced from pytorch-lightning's changelog.
... (truncated)
Commits
- `ff53616` Weekly patch release v1.6.5 (#13481)
- `74b1317` Update `__version__` field (#13200)
- `a5f82f5` Fix initialization of optimizers in DDP Strategy (#11952)
- `f89b181` Fix epoch logging on train epoch end (#13025)
- `902774a` Specify `Trainer(benchmark=False)` in parity benchmarks (#13182)
- `bd50b26` Fix logging's step values when multiple dataloaders are used during evaluatio...
- Fix not running test codes (#13089)
- `2acff1c` Avoid changing the current `cudnn.benchmark` value (#13154)
- `3c06cd8` Revert "Update deepspeed requirement from <0.6.0 to <0.7.0 in /requirements (...
- `7e52126` Fix standalone test collection (#13177)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)