Lightning-Universe / InVideo-search_app


Bump torch from 2.0.0 to 2.0.1 #46

Closed · dependabot[bot] closed this 1 year ago

dependabot[bot] commented 1 year ago

Bumps torch from 2.0.0 to 2.0.1.
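
As a quick sanity check after the bump, the installed version can be verified at runtime. This is a minimal sketch, assuming torch is importable in the app's environment:

```python
import torch

# After this bump, the pinned torch should resolve to a 2.0.1 build
print(torch.__version__)                      # e.g. "2.0.1" or "2.0.1+cu117"
assert torch.__version__.startswith("2.0.1")
```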

Release notes

Sourced from torch's releases.

PyTorch 2.0.1 Release, bug fix release

This release is meant to fix the following issues (regressions / silent correctness):

  • Fix _canonical_mask throws warning when bool masks passed as input to TransformerEncoder/TransformerDecoder (#96009, #96286)
  • Fix Embedding bag max_norm=-1 causes leaf Variable that requires grad is being used in an in-place operation #95980
  • Fix type hint for torch.Tensor.grad_fn, which can be a torch.autograd.graph.Node or None. #96804 (see the sketch after this list)
  • Can’t convert float to int when the input is a scalar np.ndarray. #97696
  • Revisit torch._six.string_classes removal #97863
  • Fix module backward pre-hooks to actually update gradient #97983
  • Fix load_sharded_optimizer_state_dict error on multi node #98063
  • Warn once for TypedStorage deprecation #98777
  • cuDNN V8 API, Fix incorrect use of emplace in the benchmark cache #97838
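
For the grad_fn type-hint fix above, a minimal illustration of why the attribute is optional (a sketch, assuming a local torch install; nothing here is specific to this repo):

```python
import torch

x = torch.randn(3, requires_grad=True)    # leaf tensor: grad_fn is None
y = (x * 2).sum()                          # produced by an autograd op: grad_fn is a graph node

assert x.grad_fn is None
assert y.grad_fn is not None
print(type(y.grad_fn).__name__)            # e.g. "SumBackward0"
```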

Torch.compile:

  • Add support for Modules with custom getitem method to torch.compile #97932 (see the sketch after this list)
  • Fix improper guards on list variables. #97862
  • Fix Sequential nn module with duplicated submodule #98880
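
A hedged sketch of the custom-__getitem__ case that torch.compile now handles; the Blocks module below is a made-up example, not code from this repo:

```python
import torch

class Blocks(torch.nn.Module):
    """Toy module that exposes its sub-layers through a custom __getitem__."""

    def __init__(self):
        super().__init__()
        self.layers = torch.nn.ModuleList(torch.nn.Linear(8, 8) for _ in range(3))

    def __getitem__(self, idx):
        return self.layers[idx]

    def forward(self, x):
        for i in range(len(self.layers)):
            x = self[i](x)  # routed through the custom __getitem__
        return x

compiled = torch.compile(Blocks())
out = compiled(torch.randn(2, 8))
```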

Distributed:

  • Fix distributed_c10d's handling of custom backends #95072
  • Fix MPI backend not properly initialized #98545

NN_frontend:

  • Update Multi-Head Attention's doc string #97046
  • Fix incorrect behavior of is_causal parameter for torch.nn.TransformerEncoderLayer.forward #97214 (see the sketch after this list)
  • Fix error for SDPA on sm86 and sm89 hardware #99105
  • Fix nn.MultiheadAttention mask handling #98375
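
For the is_causal fix above, a small sketch of the intended usage, passing an explicit causal mask alongside the is_causal flag (shapes and sizes are illustrative only):

```python
import torch
from torch import nn

layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
src = torch.randn(2, 5, 16)                                    # (batch, seq, features)
causal_mask = nn.Transformer.generate_square_subsequent_mask(5)

# is_causal marks src_mask as causal; #97214 fixes incorrect handling of this flag
out = layer(src, src_mask=causal_mask, is_causal=True)
```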

DataLoader:

  • Fix regression for pin_memory recursion when operating on bytes #97737 (see the sketch after this list)
  • Fix collation logic #97789
  • Fix potentially backwards-incompatible change with DataLoader and is_shardable Datapipes #97287
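
For the pin_memory regression above, a hedged sketch of the affected pattern (samples that mix tensors with raw bytes); the dataset is invented for illustration, and pinning only actually engages on a CUDA-capable machine:

```python
import torch
from torch.utils.data import DataLoader

# Samples mixing tensors with bytes fields, e.g. encoded ids
dataset = [{"id": b"sample-0", "features": torch.randn(4)},
           {"id": b"sample-1", "features": torch.randn(4)}]

loader = DataLoader(dataset, batch_size=2, pin_memory=True)
batch = next(iter(loader))
print(batch["id"], batch["features"].shape)
```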

MPS:

  • Fix LayerNorm crash when input is in float16 #96208
  • Add support for cumsum on int64 input #96733 (see the sketch after this list)
  • Fix issue with setting BatchNorm to non-trainable #98794
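
The MPS items above only apply on Apple-silicon machines; a small sketch of the int64 cumsum case, guarded on backend availability:

```python
import torch

# Only meaningful where the MPS backend exists (Apple silicon)
if torch.backends.mps.is_available():
    x = torch.arange(5, device="mps")        # int64 input
    print(torch.cumsum(x, dim=0))            # supported on MPS as of 2.0.1
else:
    print("MPS backend not available; skipping")
```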

Functorch:

  • Fix Segmentation Fault for vmaped function accessing BatchedTensor.data #97237
  • Fix index_select support when dim is negative #97916 (see the sketch after this list)
  • Improve docs for autograd.Function support #98020
  • Fix Exception thrown when running Migration guide example for jacrev #97746
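
For the index_select fix above, a small sketch using the torch.func (functorch) API with a negative dim under vmap; the tensor shapes are arbitrary:

```python
import torch
from torch.func import vmap   # functorch APIs live under torch.func in 2.x

x = torch.randn(4, 3, 2)
idx = torch.tensor([0, 1])

# index_select along the last dim (-1) of each batched element
out = vmap(lambda t: torch.index_select(t, -1, idx))(x)
print(out.shape)              # torch.Size([4, 3, 2])
```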


Torch.optim:

  • Fix fused AdamW causes NaN loss #95847
  • Fix fused AdamW has worse loss than Apex and unfused AdamW for fp16/AMP #98620 (see the sketch below)
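
For the fused AdamW items above, a hedged sketch of opting into the fused implementation; it requires CUDA parameters, and the model and shapes are illustrative only:

```python
import torch

assert torch.cuda.is_available(), "fused=True requires CUDA tensors"

model = torch.nn.Linear(10, 2).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, fused=True)

loss = model(torch.randn(8, 10, device="cuda")).sum()
loss.backward()
optimizer.step()          # fused kernel path; 2.0.1 fixes the NaN / worse-loss issues above
optimizer.zero_grad()
```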

The release tracker should contain all relevant pull requests related to this release as well as links to related issues


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)