Announcements
As noted in the deprecation notice in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling execution providers other than the default CPUExecutionProvider.
e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
Python 3.6 support has been removed for Mac builds. Since Python 3.6 reached end-of-life in December 2021, it will no longer be supported from the next release (ORT 1.11) onwards.
General
Support for plug-in custom thread creation and join functions to enable the use of external threads
Optional type support from opset 15
Performance
Introduced an indirect convolution method for QLinearConv with a symmetrically quantized filter, i.e., the filter type is int8 and the filter's zero point is 0. The method leverages an indirect buffer instead of memcpy'ing the original data, and doesn't need to compute the sum of each pixel of the output image for quantized Conv.
x64: new kernels, including AVX2, AVX-VNNI, AVX512, and AVX512-VNNI, for general and depthwise quantized Conv.
ARM64: new kernels for depthwise quantized Conv.
Tensor shape optimization to avoid allocating heap memory in most cases - #9542
Added transpose optimizer to push and cancel transpose ops, significantly improving perf for models requiring layout transformation
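The push/cancel idea behind the transpose optimizer can be illustrated with a toy sketch. This is a simplified, hypothetical model (`cancel_transposes` and `inverse_perm` are illustration-only helpers, not ORT's actual optimizer): two adjacent Transpose ops whose permutations are inverses of each other form a no-op and can both be removed.

```python
import numpy as np

def inverse_perm(p):
    """Return the permutation that undoes p."""
    inv = [0] * len(p)
    for i, axis in enumerate(p):
        inv[axis] = i
    return inv

def cancel_transposes(ops):
    """ops: list of ("Transpose", perm) or (op_name, None) entries.
    Drop adjacent Transpose pairs whose perms are mutual inverses."""
    out = []
    for op in ops:
        if (out and op[0] == "Transpose" and out[-1][0] == "Transpose"
                and list(op[1]) == inverse_perm(out[-1][1])):
            out.pop()  # the pair cancels to identity: drop both ops
        else:
            out.append(op)
    return out

# Sanity check with real tensors: [0, 2, 1] is its own inverse, so
# applying it twice returns the original layout.
x = np.arange(24).reshape(2, 3, 4)
assert np.array_equal(x.transpose([0, 2, 1]).transpose([0, 2, 1]), x)

ops = [("Transpose", [0, 2, 1]), ("Transpose", [0, 2, 1]), ("Relu", None)]
print(cancel_transposes(ops))  # the transpose pair is removed
```

The real optimizer additionally pushes Transpose ops through layout-agnostic nodes so that cancelable pairs meet, which is what makes it effective for layout-transformation-heavy models.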
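As a rough illustration of the indirect-buffer idea mentioned above (a simplified 1-D float sketch, not ORT's MLAS int8 kernel): instead of copying every input patch into a contiguous im2col-style buffer, the kernel builds a buffer of indices (pointers, in the real implementation) into the original input and gathers through it.

```python
import numpy as np

def conv1d_im2col(x, w):
    # Direct approach: materialize every patch (memcpy-like copies).
    k = w.size
    patches = np.stack([x[i:i + k] for i in range(x.size - k + 1)])
    return patches @ w

def conv1d_indirect(x, w):
    # Indirect approach: build a buffer of *indices* into x instead of
    # copying the patch data; the kernel gathers through the index buffer.
    k = w.size
    n_out = x.size - k + 1
    idx = np.arange(n_out)[:, None] + np.arange(k)[None, :]  # index buffer
    return x[idx] @ w

x = np.arange(10, dtype=np.float32)
w = np.array([1.0, 0.0, -1.0], dtype=np.float32)
assert np.allclose(conv1d_im2col(x, w), conv1d_indirect(x, w))
```

In the quantized int8 case with a zero filter zero point, the cross term that would otherwise require per-output-pixel input sums drops out, which is why the release note restricts the method to symmetrically quantized filters.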
API
Python
Following through on the deprecation notice in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling execution providers other than the default CPUExecutionProvider.
e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
C/C++
New API to query CUDA stream to launch a custom kernel for scenarios where custom ops compiled into shared libraries need implicit synchronization with ORT CUDA kernels - #9141
Renamed Invalid to OrtInvalidAllocator
Renamed every value in OrtCudnnConvAlgoSearch to a safer, prefixed global name
WinML
New APIs to create OrtValues from Windows-platform-specific ID3D12Resources by exposing DirectML Execution Provider specific APIs. These APIs allow DML to extend the C API and provide EP-specific extensions.
OrtSessionOptionsAppendExecutionProviderEx_DML
DmlCreateGPUAllocationFromD3DResource
DmlFreeGPUAllocation
DmlGetD3D12ResourceFromAllocation
Bug fix: LearningModel::LoadFromFilePath in UWP apps
Packages
Added Mac M1 Universal2 build support, producing a single binary that runs natively on both Apple silicon and Intel-based Macs. These binaries are included in the official packages and can also be built from source using "-arch arm64 -arch x86_64".
The Python GPU package now includes both the TensorRT and CUDA EPs. Note: EPs need to be explicitly registered to ensure the correct provider is used. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider']). Please also ensure you have the appropriate TensorRT and CUDA dependencies installed.
Execution Providers
TensorRT EP
Python GPU release packages now include support for TensorRT 8.0. Enable the TensorrtExecutionProvider by explicitly setting the providers parameter when creating an InferenceSession. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
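To make the registration requirement concrete, here is a hypothetical pure-Python sketch (`resolve_providers` is illustration-only, not an ORT API) of the priority filtering that provider registration implies: requested providers are matched against the available ones in order, and the earliest available provider is preferred at runtime.

```python
def resolve_providers(requested, available):
    """Hypothetical helper: keep the requested execution providers that
    are actually available, preserving the requested priority order."""
    chosen = [p for p in requested if p in available]
    if not chosen:
        raise ValueError("none of the requested execution providers are available")
    return chosen

# A GPU build typically reports something like this via
# onnxruntime.get_available_providers():
available = ["TensorrtExecutionProvider", "CUDAExecutionProvider",
             "CPUExecutionProvider"]

# TensorRT is tried first, CUDA acts as the fallback:
print(resolve_providers(
    ["TensorrtExecutionProvider", "CUDAExecutionProvider"], available))
```

With a real session this corresponds to InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider']); CPUExecutionProvider is typically available as a final fallback in every build.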
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
Bumps onnxruntime from 1.9.0 to 1.10.0.
Release notes
Sourced from onnxruntime's releases.
... (truncated)
Commits
- `0d9030e` add copyright (#9943)
- `1e31b2c` Release 1.10.0 Cherrypick Round 2 (#9928)
- `31a4742` Release 1.10.0 cherry pick round 1 (#9886)
- `8afd969` fix build break in release pipeline for Node.js binding test (#9850) (#9860)
- `6749e9f` Cuda instance_norm fix (#9826)
- `24f3d72` relax atol and rtol for einsum ut (#9842)
- `8564fc1` POWER10: Add optimized dgemm kernel (#9652)
- `bf5e9a5` bumping up ORT_API_VERSION to 10 (#9838)
- `fb4a8e1` Limit inclusion of Xamarin mobile target frameworks. (#9834)
- `74ca417` [js/web] optimize bundle file size (#9817)