intel / onnxruntime

ONNX Runtime: cross-platform, high performance scoring engine for ML models
MIT License

fix: Fixed inference segfault with Debug flags enabled for perf counter status logging #386

Closed by ankitm3k 2 months ago

ankitm3k commented 2 months ago

With the debug flags ORT_OPENVINO_ENABLE_DEBUG=1 and ORT_OPENVINO_ENABLE_CI_LOG=1 set on a Debug-mode build, inference with the onnxruntime_perf_test app segfaults after logging the following:

......... ......... Inference successful printing elements of the vector (inferrequests): ovInfReq.query_state()

After this point, the app is expected to print performance benchmark statistics such as performance counters and inference times.

To reproduce the issue, build the project in Debug mode and set the above flags before running inference.

This PR fixes the issue.
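
For context, here is a minimal sketch of the kind of debug-only status logging these flags gate, written against the OpenVINO 2.0 C++ API. The names (IsDebugEnabled, DumpInferRequestStates, infer_requests) are hypothetical illustrations, not the execution provider's actual identifiers:

```cpp
#include <cstdlib>
#include <iostream>
#include <vector>
#include <openvino/openvino.hpp>

// Hypothetical helper: the PR description names ORT_OPENVINO_ENABLE_DEBUG
// as an environment flag, so a runtime check like this is assumed.
bool IsDebugEnabled() {
  return std::getenv("ORT_OPENVINO_ENABLE_DEBUG") != nullptr;
}

// Hypothetical sketch of logging variable states of queued infer requests,
// mirroring the ovInfReq.query_state() call seen in the log above.
void DumpInferRequestStates(std::vector<ov::InferRequest>& infer_requests) {
  if (!IsDebugEnabled()) return;
  for (auto& req : infer_requests) {
    // query_state() may return an empty vector for stateless models;
    // iterating it directly avoids indexing past the end.
    for (auto& state : req.query_state()) {
      std::cout << "variable state: " << state.get_name() << '\n';
    }
  }
}
```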

preetha-intel commented 2 months ago

LGTM

sfatimar commented 2 months ago

LGTM. 1.18 is frozen. Preetha is upstreaming a branch to MSFT main tomorrow; if you are confident in these changes, you can ask her to pull this into her branch.

vthaniel commented 2 months ago

@ankitm3k Would it be better if the creation of "inferrequest" were kept within an "#ifndef NDEBUG" guard? It looks like this object is only used during debug builds.
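
A minimal sketch of this suggestion, assuming a hypothetical BackendState class and member name; only the #ifndef NDEBUG pattern itself is what is being proposed:

```cpp
#include <vector>
#include <openvino/openvino.hpp>

class BackendState {
 public:
#ifndef NDEBUG
  // NDEBUG is undefined in Debug builds, so this member (and any code
  // that touches it) exists only there; release builds never allocate it.
  std::vector<ov::InferRequest> infer_requests_;
#endif
};
```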

ankitm3k commented 2 months ago

We can add that too, but this issue is specific to --config Debug builds, which are affected on both the Windows and Linux compilers. I think the current fix is correct.
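
For reference, the standard NDEBUG macro is what makes such a guard portable: CMake's Debug configuration leaves NDEBUG undefined on both MSVC and GCC/Clang, while Release defines it. A small self-contained check illustrating this:

```cpp
#include <iostream>

int main() {
#ifndef NDEBUG
  // --config Debug on either Windows or Linux: debug-only code compiled in.
  std::cout << "Debug build\n";
#else
  // Release configurations define NDEBUG, compiling debug-only code out.
  std::cout << "Release build\n";
#endif
  return 0;
}
```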