Chris-fullerton opened this issue 1 year ago
You can check the onnxruntime documentation:
import onnxruntime
# Create a session options object (passed to the session below)
options = onnxruntime.SessionOptions()
# Retrieve the available providers
providers = onnxruntime.get_available_providers()
# Print the list of available providers
print(f"Available ONNX Runtime providers: {providers}")
# Load the ONNX model
model_path = "/Users/xxx/.insightface/models/buffalo_l/det_10g.onnx"
model = onnxruntime.InferenceSession(model_path, sess_options=options, providers=providers)
# Check if the CoreMLExecutionProvider is available
if "CoreMLExecutionProvider" in model.get_providers():
    print("CoreMLExecutionProvider is available")
else:
    print("CoreMLExecutionProvider is not available")
Available ONNX Runtime providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider']
2023-03-23 20:04:14.396787 [W:onnxruntime:, helper.cc:61 IsInputSupported] Dynamic shape is not supported for now, for input:input.1
CoreMLExecutionProvider is available
@nttstar It seems CoreML is not supported? The log says:
Dynamic shape is not supported for now, for input:input.1
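For older onnxruntime builds that emit this warning, one workaround is to rewrite the model with static input dimensions before creating the session. A minimal sketch using the onnx package, assuming the dynamic input is named input.1 as in the warning and that 1x3x640x640 matches the size you actually feed the detector (verify both against your model):

import onnx

# Load the detector and pin its dynamic input dims to fixed values
model_path = "/Users/xxx/.insightface/models/buffalo_l/det_10g.onnx"
model = onnx.load(model_path)

# Assumed static shape; must match the input you pass at inference time
static_shape = [1, 3, 640, 640]

for graph_input in model.graph.input:
    if graph_input.name == "input.1":
        dims = graph_input.type.tensor_type.shape.dim
        for dim, value in zip(dims, static_shape):
            # dim_value and the symbolic dim_param share a oneof,
            # so assigning dim_value clears the dynamic name
            dim.dim_value = value

onnx.save(model, "det_10g_static.onnx")

CoreML should then see fixed shapes when you point InferenceSession at the rewritten file.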
It is now supported with:
# Name                 Version   Build    Channel
onnx                   1.15.0    pypi_0   pypi
onnxruntime-silicon    1.16.0    pypi_0   pypi
Running the same test code as above now gives:
Available ONNX Runtime providers: ['CoreMLExecutionProvider', 'CPUExecutionProvider']
2024-01-12 09:51:55.623409 [W:onnxruntime:, coreml_execution_provider.cc:81 GetCapability] CoreMLExecutionProvider::GetCapability, number of partitions supported by CoreML: 7 number of nodes in the graph: 153 number of nodes supported by CoreML: 129
CoreMLExecutionProvider is available
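That warning is informational: 129 of the 153 nodes are assigned to CoreML and the remaining ones fall back to CPU automatically. If you prefer to state that fallback order explicitly instead of passing whatever get_available_providers() returns, a small sketch:

import onnxruntime

model_path = "/Users/xxx/.insightface/models/buffalo_l/det_10g.onnx"

# Providers are tried in order; nodes CoreML cannot handle run on CPU
session = onnxruntime.InferenceSession(
    model_path,
    providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())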
I know that get_default_providers() returns ['CUDAExecutionProvider', 'CPUExecutionProvider'] here: https://github.com/deepinsight/insightface/blob/30295de48907e04077d6d22a9a8f580b525822ce/python-package/insightface/model_zoo/model_zoo.py#L70, but I am not sure whether insightface's default model, buffalo_l, supports CoreML. Does it?
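For what it's worth, you don't have to rely on get_default_providers(): FaceAnalysis forwards a providers keyword down to model_zoo, so you can override the defaults per app. A sketch, assuming a build where CoreMLExecutionProvider is available (e.g. onnxruntime-silicon) and that each loaded model exposes its session attribute, as the current model_zoo classes do:

import insightface

# Override insightface's default providers (CUDA/CPU) with CoreML + CPU fallback
app = insightface.app.FaceAnalysis(
    name="buffalo_l",
    providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
)
app.prepare(ctx_id=0, det_size=(640, 640))

# Inspect which providers each model's session actually received
for task, model in app.models.items():
    print(task, model.session.get_providers())

Whether CoreML takes all of buffalo_l's graphs or only partitions of them (with CPU fallback for the rest) will show up in the same GetCapability log lines as above.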