Closed · ChristianWeyer closed this 1 week ago
Hi Christian,
When you run a Python shell, import onnxruntime, and then call onnxruntime.get_available_providers(), what output do you get? If CoreMLExecutionProvider is not in that list, that must be part of the problem.
Hi @faahmed !
this is the output:
❯ python
Python 3.11.9 (main, Apr 19 2024, 11:43:47) [Clang 14.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import onnxruntime
>>> onnxruntime.get_available_providers()
['CoreMLExecutionProvider', 'CPUExecutionProvider']
>>>
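Since CoreMLExecutionProvider shows up as available, it is also worth checking whether a created session actually activates it: onnxruntime can silently fall back to CPUExecutionProvider when a model can't be handled by an execution provider. A small sketch of building the preference list (the provider names are real onnxruntime identifiers; the model path in the commented usage is hypothetical):

```python
def pick_providers(available):
    """Prefer CoreML when it is available, always keeping the CPU fallback."""
    preferred = ["CoreMLExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

# Usage with onnxruntime (model path is hypothetical):
# import onnxruntime
# providers = pick_providers(onnxruntime.get_available_providers())
# session = onnxruntime.InferenceSession("model.onnx", providers=providers)
# session.get_providers() then shows which providers were actually applied
```

Comparing session.get_providers() against the requested list is the quickest way to see whether CoreML was really activated for a given model.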
I would try something like python run.py --gpu-vendor apple, or a similar command that uses the (deprecated) --gpu-vendor flag. On the surface, there seems to be some inconsistency in how the flags are processed (which is probably why the --gpu-vendor flag hasn't been fully removed). I hope that works, but otherwise I'm out of ideas.
Hi @ChristianWeyer ,
I did a tiny bit more digging here. ONNX Runtime would need to support something like a MetalExecutionProvider, and this project would need to let you pass a suitable flag for it.
It turns out that Apple CoreML does not equal GPU acceleration in most cases: it uses CPU features designed to speed up inference, but those don't actually leverage the GPU.
Hey, thanks for digging deeper.
So that means, we are doomed for now on our Macs? ;-)
Yes 😬
I really can't help with this, as I don't have a Mac either; that's why the repo is open for the public to push commits in case they see a way to improve it :)
OK, thanks guys!
Hi all,
I successfully cloned the repo, downloaded the models and started the app.
However, live mode is veeery slow, and asitop tells me that the GPU is not really being used. I started the app with
python run.py --execution-provider coreml
Any ideas what I am doing wrong here? Thanks!