Hello, I ran inference with the Faster R-CNN model from the ONNX Model Zoo, and an error occurs during `caclulate_macs`:
raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
Should I estimate the MACs during inference instead? How am I supposed to do that?