Closed RJKeevil closed 3 months ago
Thanks for pointing it out! Here is my upgrade process. None of it is that difficult; it's mostly just checking details:
- Replace the `onnxruntime_c_api.h` header with the new version extracted from the onnxruntime release.
- Update the shared libraries under `test_data/`:
  - `onnxruntime.dll` is the non-GPU version for x86_64 Windows.
  - `onnxruntime_arm64.so` is the version for arm64 Linux, also without GPU support (just the version extracted from the official release).
  - `onnxruntime_arm64.dylib` is the version for arm64 OSX, not the "universal" OSX library. (The OSX libraries always include CoreML support, as far as I know.)
- Make sure the function pointers in the `DummyOrtDMLAPI` struct haven't changed order. See the comment near the top of `onnxruntime_wrapper.c` about this.
- Make sure that all of the `ONNX_TENSOR_ELEMENT_DATA_TYPE_*` enum values are still exposed in `onnxruntime_go.go`. Looking at `onnxruntime_c_api.h` for 1.18.0, it looks like nothing has changed here, fortunately.
- Replace all references to version 1.17.1 in the README and anywhere else in the Go or C files.
- Check the CUDA version requirements mentioned in the README to make sure they still match what's required by onnxruntime.
- Make sure all of the tests and benchmarks run and pass (`go test -v -bench=.`). Set the `ONNXRUNTIME_SHARED_LIBRARY_PATH` environment variable to point to the correct version before running the tests.

In the past, I've somewhat intentionally waited until the `.1` updates, so I can wait until any major bugfixes have landed after the `.0` releases. Though I'm not married to the idea. If you think that a 1.18.1 release is likely in the next month or so, I'd somewhat prefer holding off until it's ready. Updating for every minor release means adding yet another copy of the test binaries to the commit history, which is a nuisance I'd like to avoid. That said, if you think 1.18.0 is good enough and likely to be stable, or just want the update faster, then I wouldn't turn down an earlier PR.
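The enum-exposure check above can be partially automated. Here's a minimal, stdlib-only sketch; the file paths are placeholders, and it assumes the Go source references the C enum names verbatim (e.g. in a cgo mapping), which may not match how the repo actually exposes them:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"sort"
)

// enumPattern matches ONNX_TENSOR_ELEMENT_DATA_TYPE_* identifiers.
var enumPattern = regexp.MustCompile(`ONNX_TENSOR_ELEMENT_DATA_TYPE_[A-Z0-9_]+`)

// missingEnumNames returns enum identifiers that appear in the C header
// text but not in the Go source text, deduplicated and sorted.
func missingEnumNames(headerText, goText string) []string {
	inGo := make(map[string]bool)
	for _, name := range enumPattern.FindAllString(goText, -1) {
		inGo[name] = true
	}
	seen := make(map[string]bool)
	var missing []string
	for _, name := range enumPattern.FindAllString(headerText, -1) {
		if !inGo[name] && !seen[name] {
			missing = append(missing, name)
			seen[name] = true
		}
	}
	sort.Strings(missing)
	return missing
}

func main() {
	// Paths are assumptions; adjust to wherever the files live in your checkout.
	if len(os.Args) < 3 {
		fmt.Println("usage: enumcheck <onnxruntime_c_api.h> <onnxruntime_go.go>")
		return
	}
	header, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	goSrc, err := os.ReadFile(os.Args[2])
	if err != nil {
		panic(err)
	}
	for _, name := range missingEnumNames(string(header), string(goSrc)) {
		fmt.Println("not exposed in Go:", name)
	}
}
```

An empty result is only a hint that nothing is missing, not a guarantee; reviewing the header diff by hand is still the authoritative check.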
For reference, here is the commit where I updated to 1.17.1: fff18e75229e0
Feel free to work on a PR yourself if you want! Otherwise, I will likely take care of it myself in a couple of weeks, after I've gotten a better feel for whether a 1.18.1 version is likely.
Fantastic instructions! Thank you, let's see how far I can get over this weekend.
Started a draft PR (https://github.com/yalue/onnxruntime_go/pull/55) that should be almost there, but I'm hitting a compilation error you can perhaps help with?
Edit: I think it's just some cgo shenanigans in my env; should be able to fix.
Yes, I just tested your PR (only on Windows and without GPU stuff), and the tests passed. So it's likely a local issue on your end.
Thanks @yalue, yes, I can't seem to get gcc/mingw to play nice on Windows, but I was able to run all tests on Linux x86-64, including with CUDA:
```
=== RUN   TestCUDASession
--- PASS: TestCUDASession (2.72s)
```
I think, combined with your successful tests on Windows, we can be confident this is looking OK? Perhaps, if you have time, you could also test DirectML on Windows?
I don't have DirectML set up on my Windows machine, but I will test CUDA and TensorRT on arm64 Linux. Assuming those tests pass I'll go ahead and merge the PR. (Probably tomorrow.) Thanks for the help!
The tests and benchmarks all seemed fine with CUDA on Windows, and with TensorRT and CUDA on arm64 Linux. Here's the benchmark portion of my `go test -v -bench=.` output on the Orin Nano:
```
goos: linux
goarch: arm64
pkg: github.com/yalue/onnxruntime_go
BenchmarkOpSingleThreaded
BenchmarkOpSingleThreaded-6       1    3395781856 ns/op
BenchmarkOpMultiThreaded
BenchmarkOpMultiThreaded-6        1    1020700102 ns/op
BenchmarkCUDASession
BenchmarkCUDASession-6            2     676413850 ns/op
BenchmarkTensorRTSession
BenchmarkTensorRTSession-6        4     328551902 ns/op
BenchmarkCoreMLSession
    onnxruntime_test.go:1425: Couldn't enable CoreML: Your platform or onnxruntime library does not support CoreML. This may be due to your system or onnxruntime library version not supporting CoreML.
--- SKIP: BenchmarkCoreMLSession
BenchmarkDirectMLSession
    onnxruntime_test.go:1457: Couldn't enable DirectML: Specified provider is not supported.. This may be due to your system or onnxruntime library version not supporting DirectML.
--- SKIP: BenchmarkDirectMLSession
BenchmarkOpenVINOSession
    onnxruntime_test.go:1490: Couldn't enable OpenVINO: /home/otternes/onnxruntime_stuff/onnxruntime/onnxruntime/core/session/provider_bridge_ort.cc:1426 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_openvino.so with error: libonnxruntime_providers_openvino.so: cannot open shared object file: No such file or directory
    . This may be due to your system or onnxruntime library version not supporting OpenVINO.
--- SKIP: BenchmarkOpenVINOSession
PASS
ok      github.com/yalue/onnxruntime_go    127.649s
```
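For anyone skimming the numbers: each Go benchmark line reports iteration count and average `ns/op`, so the relative speedups fall out directly (e.g. TensorRT runs roughly 10x faster than the single-threaded CPU op here). A throwaway stdlib sketch; the parsing is specific to the line format above:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// parseNsPerOp extracts "name -> ns/op" pairs from `go test -bench` output
// lines of the form "BenchmarkFoo-6   2   676413850 ns/op".
func parseNsPerOp(output string) map[string]float64 {
	result := make(map[string]float64)
	scanner := bufio.NewScanner(strings.NewReader(output))
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// Expect exactly: name, iteration count, ns/op value, "ns/op".
		if len(fields) != 4 || fields[3] != "ns/op" {
			continue
		}
		ns, err := strconv.ParseFloat(fields[2], 64)
		if err != nil {
			continue
		}
		result[fields[0]] = ns
	}
	return result
}

func main() {
	// Numbers copied from the benchmark output above.
	output := `BenchmarkOpSingleThreaded-6 1 3395781856 ns/op
BenchmarkOpMultiThreaded-6 1 1020700102 ns/op
BenchmarkCUDASession-6 2 676413850 ns/op
BenchmarkTensorRTSession-6 4 328551902 ns/op`
	times := parseNsPerOp(output)
	base := times["BenchmarkOpSingleThreaded-6"]
	for name, ns := range times {
		fmt.Printf("%s: %.1fx vs single-threaded\n", name, base/ns)
	}
}
```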
So I went ahead and merged the PR. Thanks for the help.
Thanks @yalue , happy to help! I'll probably wait for a release to pull it into Hugot (https://github.com/knights-analytics/hugot), do you have a plan currently for when the next release might be?
Oops, I just forgot to add the tag. It's now in `v1.10.0`.
Hi, just a heads up that ONNX Runtime v1.18.0 was released today. I'm happy to create a PR to upgrade to this. Do you currently have a process for this upgrade? I imagine I need to sense-check headers, etc.?