bit-bots / YOEO

YouOnlyEncodeOnce - A CNN for Embedded Object Detection and Semantic Segmentation
GNU General Public License v3.0

Bump onnxruntime from 1.10.0 to 1.11.0 #37

Closed. dependabot[bot] closed this 2 years ago

dependabot[bot] commented 2 years ago

Bumps onnxruntime from 1.10.0 to 1.11.0.
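For anyone verifying the bump locally, a minimal sanity check (a sketch assuming the `onnxruntime` wheel from PyPI is installed; the `onnxruntime-gpu` package exposes the same calls):

```python
import onnxruntime as ort

# Confirm the version actually being imported after the bump.
print(ort.__version__)            # expected: 1.11.0 once this PR is merged

# List the execution providers compiled into this build
# (e.g. CPUExecutionProvider, plus CUDAExecutionProvider for GPU wheels).
print(ort.get_available_providers())
```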

Release notes

Sourced from onnxruntime's releases.

ONNX Runtime v1.11.0

Key Updates

General

  • Support for ONNX 1.11 with opset 16 (see the export sketch after this list)
  • Updated protobuf version to 3.18.x
  • Enable usage of Mimalloc (details)
  • Transformer model helper scripts
  • On Windows, error strings in OrtStatus are now encoded in UTF-8. When printing them to the screen, first convert them to a wide-character string using the MultiByteToWideChar Windows API.
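To illustrate the opset 16 item above, a minimal export-and-load sketch. This assumes a PyTorch model and a PyTorch version whose ONNX exporter already supports opset 16; the model, file name, and input shape are placeholders:

```python
import torch
import onnxruntime as ort

# Hypothetical model and input shape; substitute your own network.
model = torch.nn.Conv2d(3, 16, kernel_size=3).eval()
dummy = torch.randn(1, 3, 224, 224)

# Export with the newly supported opset 16 (requires a PyTorch release
# whose ONNX exporter knows opset 16).
torch.onnx.export(model, dummy, "model_opset16.onnx", opset_version=16,
                  input_names=["input"], output_names=["output"])

# ONNX Runtime 1.11 can load and run opset 16 graphs.
session = ort.InferenceSession("model_opset16.onnx",
                               providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": dummy.numpy()})
print(outputs[0].shape)
```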

Performance

  • Memory-utilization-related performance improvements (e.g. elimination of vectors for small dims)
  • Performance variance stability improvement through dynamic cost model session option (details)
  • Added support for CUDA graphs (preview).
  • New quantization data format support: S8S8 in QDQ format (a quantization sketch follows this list)
    • Added s8s8 kernels for ARM64
    • Support to convert s8s8 to u8s8 automatically for x64
  • Improved performance on ARM64 for quantized CNN models through:
    • New kernels for quantized depthwise Conv
    • Improved symmetrically quantized Conv by leveraging indirect buffer
    • New Gemm kernels for symmetric quantized Conv and MatMul
  • General quantization improvements, including new quantized operators (Resize, ArgMax) and quantization tool updates
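As a sketch of the S8S8/QDQ item above, a hedged example using onnxruntime's static quantization tooling. The model paths are hypothetical and the random calibration reader stands in for real calibration data:

```python
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      QuantType, quantize_static)

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few random batches for calibration; replace with real data."""
    def __init__(self, input_name="input", num_batches=8):
        self._data = iter(
            [{input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)}
             for _ in range(num_batches)]
        )

    def get_next(self):
        return next(self._data, None)

# Produce a QDQ-format model with int8 activations and weights (S8S8),
# the combination called out in the release notes above.
quantize_static(
    "model_opset16.onnx",          # hypothetical float32 input model
    "model_int8_qdq.onnx",         # quantized output
    RandomCalibrationReader(),
    quant_format=QuantFormat.QDQ,
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
)
```

Per the notes above, ARM64 gets dedicated s8s8 kernels, while x64 converts S8S8 to U8S8 automatically.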

Packages

  • Nuget packages
    • C# packages are now tested with .NET 5. .NET Core 2.1 support is deprecated, as it reached end of life on August 21, 2021. We will closely follow .NET's support policy.
    • Removed PDB files. These are attached as release artifacts below.
  • Pypi packages
    • Python 3.6 is deprecated, as it reached end of life in December 2021. Supported Python versions: 3.7-3.9

Execution Providers

  • CUDA
    • Enable CUDA provider option configuration for C# (including workspace size configuration) and fix binary compatibility of the CUDAProviderOptions C API (a Python sketch of provider-option configuration follows this list)
    • Preview support for CUDA Graphs (details)
  • TensorRT
    • TRT 8.2.3 support
    • Memory footprint optimizations
    • Support protobuf >= 3.11
    • Updated flatbuffers version to 2.0
    • Misc Bug Fixes
  • DirectML
    • Updated more operators to opset 13 (QuantizeLinear, DequantizeLinear, ReduceSum, Split, Squeeze, Unsqueeze).
  • OpenVINO
    • Device type check: verifies at a very early stage that the physical device is available on the host when using ONNX Runtime APIs
    • Reduced CPU utilization on the iGPU path by using throttling
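The CUDA item above concerns the C# bindings; for comparison, a rough Python sketch of configuring CUDAExecutionProvider options. It assumes the onnxruntime-gpu build, and the model path and option values are placeholders:

```python
import onnxruntime as ort

# Provider-specific options are passed as (name, options) tuples; these are
# the same knobs exposed through the CUDAProviderOptions C API.
cuda_options = {
    "device_id": 0,
    "gpu_mem_limit": 2 * 1024 * 1024 * 1024,    # cap the arena at 2 GiB
    "arena_extend_strategy": "kSameAsRequested",
    "cudnn_conv_algo_search": "EXHAUSTIVE",
}

session = ort.InferenceSession(
    "model_opset16.onnx",                        # hypothetical model path
    providers=[("CUDAExecutionProvider", cuda_options),
               "CPUExecutionProvider"],           # CPU fallback
)
print(session.get_providers())
```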

Mobile

  • Added general support for converting a model to NHWC layout at runtime

... (truncated)

Commits
  • b713855 Release 1.11.0 cherry pick round 1 (#10915)
  • e0cec5c skip optional related models from opset16 (#10840)
  • 7f0b0d0 Fix bug in MemcpyToHost (#10829)
  • e645626 remove final six (#10822)
  • 22c4755 Make QDQSelectorActionTransformer() is_int8_allowed parameter required. (#10823)
  • cc6bc34 Update protobuf submodule (#10801)
  • 58521fb Make training CUDA kernels to adhere established code structure patterns (#10...
  • 4ef81b1 Making the Java tests faster by optionally disabling ones which require runni...
  • ae97ecf Fix CPU, CUDA Selu activation logic (#10771)
  • c147c9d Remove ORT_ENABLE_RUNTIME_OPTIMIZATION_IN_MINIMAL_BUILD. (#10778)
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
dependabot[bot] commented 2 years ago

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired update_types to your config file.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.