artefactory / NLPretext

All the goto functions you need to handle NLP use-cases, integrated in NLPretext
https://artefactory.github.io/NLPretext/
Apache License 2.0

chore(deps): update torch requirement from ^1.9.0 to >=1.9,<3.0 #309

Open · dependabot[bot] opened 10 months ago

dependabot[bot] commented 10 months ago

Updates the requirements on torch to permit the latest version.
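This widens the Poetry caret constraint `^1.9.0`, which is equivalent to `>=1.9.0,<2.0.0` and therefore excludes every torch 2.x release, to `>=1.9,<3.0`. A minimal sketch of the difference using the third-party `packaging` library (the library choice and the sample versions are illustrative, not part of this PR):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Poetry's caret constraint ^1.9.0 translates to >=1.9.0,<2.0.0,
# so every torch 2.x release was previously excluded.
old_spec = SpecifierSet(">=1.9.0,<2.0.0")
new_spec = SpecifierSet(">=1.9,<3.0")

for raw in ["1.9.0", "1.13.1", "2.1.1"]:
    v = Version(raw)
    print(raw, v in old_spec, v in new_spec)

# 1.9.0   True  True
# 1.13.1  True  True
# 2.1.1   False True
```

Under the old constraint, 2.1.1 is rejected; under the new range it is accepted while still capping the major version below 3.0.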

Release notes

Sourced from torch's releases.

PyTorch 2.1.1 Release, bug fix release

This release is meant to fix the following issues (regressions / silent correctness):

  • Remove spurious warning in comparison ops (#112170)
  • Fix segfault in foreach_* operations when input list length does not match (#112349)
  • Fix cuda driver API to load the appropriate .so file (#112996)
  • Fix missing CUDA initialization when calling FFT operations (#110326)
  • Ignore beartype==0.16.0 within the onnx package as it is incompatible (#111861)
  • Fix the behavior of torch.new_zeros in onnx due to TorchScript behavior change (#111694)
  • Remove unnecessary slow code in torch.distributed.checkpoint.optimizer.load_sharded_optimizer_state_dict (#111687)
  • Add planner argument to torch.distributed.checkpoint.optimizer.load_sharded_optimizer_state_dict (#111393)
  • Continue if param not exist in sharded load in torch.distributed.FSDP (#109116)
  • Fix handling of non-contiguous bias_mask in torch.nn.functional.scaled_dot_product_attention (#112673)
  • Fix the meta device implementation for nn.functional.scaled_dot_product_attention (#110893)
  • Fix copy from mps to cpu device when storage_offset is non-zero (#109557)
  • Fix segfault in torch.sparse.mm for non-contiguous inputs (#111742)
  • Fix circular import between Dynamo and einops (#110575)
  • Verify flatbuffer module fields are initialized for mobile deserialization (#109794)
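One of the fixes above concerns `torch.nn.functional.scaled_dot_product_attention` receiving an attention mask with a non-contiguous memory layout. A minimal sketch of the input pattern that fix targets (all shapes and values here are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

# Shapes are (batch, heads, seq_len, head_dim); all values illustrative.
q = torch.randn(2, 4, 8, 16)
k = torch.randn(2, 4, 8, 16)
v = torch.randn(2, 4, 8, 16)

# An additive attention mask made non-contiguous via a transpose view;
# 2.1.1 fixes how such masks are handled.
mask = torch.zeros(2, 4, 8, 8).transpose(-1, -2)
assert not mask.is_contiguous()

out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
print(out.shape)  # torch.Size([2, 4, 8, 16])
```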

The release tracker issue pytorch/pytorch#110961 lists all pull requests included in this release, along with links to related issues.

Changelog

Sourced from torch's changelog.

Releasing PyTorch

Release Compatibility Matrix

Following is the Release Compatibility Matrix for PyTorch releases:

| PyTorch version | Python | Stable CUDA | Experimental CUDA |
| --- | --- | --- | --- |
| 2.1 | >=3.8, <=3.11 | CUDA 11.8, cuDNN 8.7.0.84 | CUDA 12.1, cuDNN 8.9.2.26 |

... (truncated)
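To compare a local environment against this matrix, the interpreter version and the CUDA toolchain the installed torch build targets can be inspected at runtime; a small sketch using standard torch attributes (not specific to this PR):

```python
import sys
import torch

# Report the interpreter and the CUDA toolchain the installed torch
# build targets, for comparison against the matrix above.
print("Python:", ".".join(map(str, sys.version_info[:3])))
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)        # None on CPU-only builds
print("cuDNN:", torch.backends.cudnn.version())   # None if cuDNN is unavailable
print("CUDA available:", torch.cuda.is_available())
```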

Commits


You can trigger a rebase of this PR by commenting `@dependabot rebase`.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Note: Automatic rebases have been disabled on this pull request as it has been open for over 30 days.