gmalivenko / onnx-opcounter

Count number of parameters / MACs / FLOPS for ONNX models.
Apache License 2.0

Value error: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled #5

Open Apisteftos opened 1 year ago

Apisteftos commented 1 year ago

Hello, I ran inference with the Faster R-CNN model from the ONNX Model Zoo, and an error occurs during calculate_macs:


raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...) 

Should I estimate the MACs during inference? How am I supposed to do that?
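The error itself suggests a workaround: since ONNX Runtime 1.9, `InferenceSession` must be given an explicit `providers` list, and the session that `calculate_macs` creates internally apparently omits it. A minimal sketch (my own helper, not part of onnx-opcounter) wraps the session constructor so a default `CPUExecutionProvider` is injected whenever no providers are passed:

```python
# Hedged sketch: inject a default providers list into InferenceSession
# calls made by library code we do not control (e.g. calculate_macs).
def with_default_providers(session_cls, providers=("CPUExecutionProvider",)):
    """Wrap a session constructor so 'providers' is always set.

    session_cls is expected to accept a keyword argument 'providers',
    as onnxruntime.InferenceSession does in ORT >= 1.9.
    """
    def make_session(*args, **kwargs):
        # Only fill in a default; an explicit providers argument wins.
        kwargs.setdefault("providers", list(providers))
        return session_cls(*args, **kwargs)
    return make_session
```

Applied before calling `calculate_macs`, something like `onnxruntime.InferenceSession = with_default_providers(onnxruntime.InferenceSession)` should let the library's internal session construction succeed; this assumes the library creates its session via that attribute, which I have not verified against the onnx-opcounter source.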