bigcat88 closed this pull request 3 days ago
Correction: technically you can still use ONNX on Apple silicon. You'd swap to onnxruntime-silicon, I believe, like:
onnxruntime-gpu; sys_platform != 'linux'
onnxruntime-silicon; sys_platform == 'darwin'
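The conditions after the semicolons are PEP 508 environment markers, which pip evaluates at install time to decide whether a requirement applies. A hedged sketch of how a requirements.txt might combine them (the platform_machine marker is an addition not in the comment above; it distinguishes Apple silicon from Intel Macs):

# requirements.txt sketch, assuming these markers match the intended platforms
onnxruntime-gpu; sys_platform != 'darwin'
onnxruntime-silicon; sys_platform == 'darwin' and platform_machine == 'arm64'
onnxruntime; sys_platform == 'darwin' and platform_machine == 'x86_64'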
Many thanks for the precise comment; I didn't know the onnxruntime-silicon package existed.
Updated both pull requests.
@cubiq Any comments from your side would be much appreciated: what is missing, or what should be changed for these changes to land in the main branch?
Is this tested working? Not sure if this should be left to the documentation
Yes, I added an option to the UI so the provider can be selected, and tested it.
When CoreML is selected as a provider:
Applied providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CoreMLExecutionProvider': {}}
find model: /Users/shurik/PycharmProjects/Visionatrix/vix_backend/models/insightface/models/antelopev2/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Tested on a few workflows; everything works fine, both on CPU and on CoreML.
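The log above shows the provider fallback chain that onnxruntime reports. A minimal sketch of how such a list could be built per platform, using only the standard library; `pick_providers` is a hypothetical helper, not code from this pull request:

```python
import platform
import sys


def pick_providers() -> list:
    """Hypothetical helper: choose an ONNX Runtime provider list per platform.

    Mirrors the fallback chain from the log above: CoreML on Apple
    silicon, CPU everywhere as the last resort.
    """
    providers = []
    if sys.platform == "darwin" and platform.machine() == "arm64":
        # onnxruntime-silicon ships the CoreML execution provider
        providers.append("CoreMLExecutionProvider")
    elif sys.platform != "darwin":
        # onnxruntime-gpu ships the CUDA execution provider
        providers.append("CUDAExecutionProvider")
    # CPU is always available and always the final fallback
    providers.append("CPUExecutionProvider")
    return providers


# The list would then be passed to onnxruntime, e.g.:
#   session = onnxruntime.InferenceSession(model_path, providers=pick_providers())
```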
Good day.
There are no binary wheels for macOS for this package, and there won't be any, because the package does not work on Mac.
There are no binary wheels for Linux aarch64 (arm64) either, so that platform is added to the ignore list as well.
I hope this little pull request will be useful.
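The same environment-marker mechanism can express the ignore list described above. A hedged sketch (somepackage is a placeholder, since the package name is not given here; the markers are assumptions about how the exclusion could be written):

# skip a package that has no macOS or Linux-aarch64 wheels
somepackage; sys_platform != 'darwin' and (sys_platform != 'linux' or platform_machine != 'aarch64')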