Nugget2920 opened 1 year ago
Almost every time I run this, the following error pops up:
raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
I'm getting this error when I try to run this from my Anaconda environment, and I can't figure it out. There is a problem with the Colab version as well.
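For reference, you can print the execution providers your onnxruntime build ships with (this is the same list the error message echoes back):

import onnxruntime

# Lists what this ORT build supports, e.g.
# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
print(onnxruntime.get_available_providers())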
Here is how I solved this issue.
Go to "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\model_zoo\model_zoo.py" Edit the file on the on the line 56 or somewhere above it.
I did this in conda, so I edited the file in "C:\Users\Jatin\anaconda3\envs\simswap\Lib\site-packages\insightface\model_zoo". In model_zoo.py I replaced
def get_model(self):
    session = onnxruntime.InferenceSession(self.onnx_file, None)
with
def get_model(self):
    session = onnxruntime.InferenceSession(self.onnx_file, providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
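A slightly more defensive variant of the same edit (my own sketch, not from the insightface source) keeps CPUExecutionProvider as a last resort, so the session still loads on machines where TensorRT or CUDA isn't set up:

def get_model(self):
    # Ordered by preference; onnxruntime falls back to the next provider
    # when one fails to load (e.g. TensorRT not installed).
    providers = ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
    session = onnxruntime.InferenceSession(self.onnx_file, providers=providers)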
After this I had errors about np.float in the reverse2original file. np.float was a deprecated alias for Python's built-in float and was removed in NumPy 1.24, so I changed np.float to np.float64 in three lines in that file.
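The exact lines vary, but the change looks like this (illustrative only, not the actual code from the reverse2original file):

# Before: raises AttributeError on NumPy >= 1.24, where np.float was removed
mask = mask.astype(np.float)
# After: np.float aliased Python's float, which NumPy treats as float64
mask = mask.astype(np.float64)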
Solved it for me too! Thanks!
Running an RTX 3060 Ti with CUDA 12.0 on Windows 11.
File "D:\SimSwap\SimSwap\test_video_swapsingle.py", line 58, in
app = Face_detect_crop(name='antelope', root='./insightface_func/models')
File "D:\SimSwap\SimSwap\insightface_func\face_detect_crop_single.py", line 40, in init
model = model_zoo.get_model(onnx_file)
File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
model = router.get_model()
File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
session = onnxruntime.InferenceSession(self.onnx_file, None)
File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in init
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 375, in _create_inference_session
raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
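After applying the fix above, a quick way to confirm the session actually picked up a GPU provider (the model path below is just a placeholder):

import onnxruntime

# Placeholder path; point this at one of the downloaded .onnx models.
sess = onnxruntime.InferenceSession(
    "model.onnx",
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'],
)
# Shows the providers that were actually applied to this session.
print(sess.get_providers())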