QDucasse / nn_benchmark

🧠 Benchmark facility to train networks on different datasets for PyTorch/Brevitas
MIT License

Bump onnxruntime from 1.2.0 to 1.8.0 #24

Closed · dependabot-preview[bot] closed this 3 years ago

dependabot-preview[bot] commented 3 years ago

Bumps onnxruntime from 1.2.0 to 1.8.0.
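
A quick, optional sanity check after the upgrade (not part of the Dependabot changelog), assuming the package is imported as `onnxruntime`:

```python
import onnxruntime as ort

# Confirm the upgraded wheel is the one actually being imported.
print(ort.__version__)                 # expected: 1.8.0

# List the execution providers compiled into this build
# (e.g. CPUExecutionProvider, possibly CUDAExecutionProvider).
print(ort.get_available_providers())
```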

Release notes

Sourced from onnxruntime's releases.

ONNX Runtime v1.7.2

This is a minor patch release on 1.7.1 with the following changes:

ONNX Runtime v1.7.1

The Microsoft.ML.OnnxRuntime.Gpu and Microsoft.ML.OnnxRuntime.Managed packages are uploaded to Nuget.org. Please note the version numbers for the Microsoft.ML.OnnxRuntime.Managed package.

ONNX Runtime v1.7.0

Announcements

Starting from this release, all ONNX Runtime CPU packages are now built without OpenMP. A version with OpenMP is available on Nuget (Microsoft.ML.OnnxRuntime.OpenMP) and PyPi (onnxruntime-openmp). Please report any issues in GH Issues.
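
Not part of the release notes: since the default CPU packages no longer use OpenMP, intra-op parallelism is handled by ONNX Runtime's own thread pool and is tuned through `SessionOptions` rather than `OMP_NUM_THREADS`. A minimal Python sketch, with `model.onnx` as a placeholder path:

```python
import onnxruntime as ort

# With OpenMP removed from the default CPU build, thread counts are
# configured on the session options instead of OpenMP env variables.
opts = ort.SessionOptions()
opts.intra_op_num_threads = 4   # threads used within a single operator
opts.inter_op_num_threads = 1   # threads used across operators

# "model.onnx" is a placeholder; any exported ONNX model works here.
session = ort.InferenceSession("model.onnx", sess_options=opts)
```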

Note: The 1.7.0 GPU package is uploaded on this Azure DevOps Feed due to the size limit on Nuget.org. Please use 1.7.1 for the GPU package through Nuget.

Key Feature Updates

General

  • Mobile
    • Custom operators now supported in the ONNX Runtime Mobile build
    • Added ability to reduce types supported by operator kernels to only the types required by the models
      • Expect a 25-33% reduction in binary size contribution from the kernel implementations. Reduction is model dependent, but testing with common models like Mobilenet v2, SSD Mobilenet and Mobilebert achieved reductions in this range.
  • Custom op support for dynamic input
  • MKLML/openblas/jemalloc build configs removed
  • Removed dependency on gemmlowp
  • [Experimental] Audio Operators
    • Fourier Transforms (DFT, IDFT, STFT), Windowing Functions (Hann, Hamming, Blackman), and a MelWeightMatrix operator in the "com.microsoft.experimental" domain
    • Buildable using the ms_experimental build flag (included in the Microsoft.AI.MachineLearning NuGet package)

Performance

  • Quantization
    • Quantization tool now supports quantization of models in QDQ (QuantizeLinear-DequantizeLinear) format; see the sketch after this list
    • Depthwise Conv quantization performance improvement
    • Quantization support added for Pad, Split, and MaxPool with channels-last layout
    • QuantizeLinear performance improvement on AVX512
    • Optimization: Fusion for Conv + Mul/Add
  • Transformers
    • Longformer Attention CUDA kernel memory footprint reduction
    • Einsum Float16 CUDA kernel for ALBERT and XLNet
    • Python optimizer tool now supports fusion for BART
    • CPU profiling tool for transformers models
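
Not part of the release notes: a rough sketch of how the QDQ-format static quantization mentioned above might be invoked from Python, assuming the `quantize_static`/`QuantFormat` API of the bundled quantization tool; the model paths and the random-data calibration reader are placeholders.

```python
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader, QuantFormat, quantize_static,
)

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few random batches for calibration (placeholder data only)."""
    def __init__(self, input_name="input", shape=(1, 3, 224, 224), batches=8):
        self._batches = iter(
            [{input_name: np.random.rand(*shape).astype(np.float32)}
             for _ in range(batches)]
        )

    def get_next(self):
        return next(self._batches, None)

# Emit an int8 model in QDQ (QuantizeLinear/DequantizeLinear) format.
quantize_static(
    "fp32_model.onnx",            # placeholder: float32 input model
    "int8_model.onnx",            # placeholder: quantized output model
    RandomCalibrationReader(),
    quant_format=QuantFormat.QDQ,
)
```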

APIs and Packages

  • Python 3.8 and 3.9 support added for all platforms; support for Python 3.5 removed
  • ARM32/64 Windows builds are now included in the CPU Nuget and zip packages
  • WinML
    • .NET5 support - will work with .NET5 Standard 2.0 Projections
    • Image descriptors expose NominalPixelRange properties
      • Native support added for additional pixel ranges [0..1] and [-1..1] in image models.
      • A new property is added to the ImageFeatureDescriptor runtimeclass to expose ImageNominalPixelRange. Other similar properties exposed are the image's BitmapPixelFormat and BitmapAlphaMode.
    • Bug fixes and performance improvements, including #6249

... (truncated)

Commits


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
- `@dependabot badge me` will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in your Dependabot [dashboard](https://app.dependabot.com):

- Update frequency (including time of day and day of week)
- Pull request limits (per update run and/or open at any time)
- Out-of-range updates (receive only lockfile updates, if desired)
- Security updates (receive only security updates, if desired)
dependabot-preview[bot] commented 3 years ago

Superseded by #26.