While working with @LoreMoretti and @traversaro, we observed that during inference onnxruntime spawns `n` threads, where `n` is the number of cores, as explained in https://onnxruntime.ai/docs/performance/tune-performance/threading.html. This slows down other threads and processes running on the same machine. This PR lets the user set the number of threads used for inference.
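For reference, here is a minimal sketch of how the thread count can be capped through onnxruntime's documented `SessionOptions` API; the option name and plumbing exposed by this PR may differ, and `model.onnx` is a placeholder:

```python
import onnxruntime as ort

# Cap the threads onnxruntime uses, instead of the default
# of one intra-op thread per core.
sess_options = ort.SessionOptions()
sess_options.intra_op_num_threads = 2  # threads used to parallelize a single operator
sess_options.inter_op_num_threads = 1  # threads used to run independent operators

session = ort.InferenceSession("model.onnx", sess_options)
```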