neuralchen / SimSwap

An arbitrary face-swapping framework on images and videos with one single trained model!

ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. #316

Open santosadrian opened 2 years ago

santosadrian commented 2 years ago

    (simswap) C:\Users\foldd\Desktop\SimSwap>python test_video_swapmulti.py --crop_size 224 --use_mask --name people --Arc_path arcface_model/arcface_checkpoint.tar --pic_a_path lo.png --video_path pt.mp4 --output_path ./output/multi_test_swapmulti-pt.mp4 --temp_path ./temp
    ------------ Options -------------
    Arc_path: arcface_model/arcface_checkpoint.tar
    aspect_ratio: 1.0
    batchSize: 8
    checkpoints_dir: ./checkpoints
    cluster_path: features_clustered_010.npy
    crop_size: 224
    data_type: 32
    dataroot: ./datasets/cityscapes/
    display_winsize: 512
    engine: None
    export_onnx: None
    feat_num: 3
    fineSize: 512
    fp16: False
    gpu_ids: [0]
    how_many: 50
    id_thres: 0.03
    image_size: 224
    input_nc: 3
    instance_feat: False
    isTrain: False
    label_feat: False
    label_nc: 0
    latent_size: 512
    loadSize: 1024
    load_features: False
    local_rank: 0
    max_dataset_size: inf
    model: pix2pixHD
    multisepcific_dir: ./demo_file/multispecific
    nThreads: 2
    n_blocks_global: 6
    n_blocks_local: 3
    n_clusters: 10
    n_downsample_E: 4
    n_downsample_global: 3
    n_local_enhancers: 1
    name: people
    nef: 16
    netG: global
    ngf: 64
    niter_fix_global: 0
    no_flip: False
    no_instance: False
    no_simswaplogo: False
    norm: batch
    norm_G: spectralspadesyncbatch3x3
    ntest: inf
    onnx: None
    output_nc: 3
    output_path: ./output/multi_test_swapmulti-pt.mp4
    phase: test
    pic_a_path: lo.png
    pic_b_path: ./crop_224/zrf.jpg
    pic_specific_path: ./crop_224/zrf.jpg
    resize_or_crop: scale_width
    results_dir: ./results/
    semantic_nc: 3
    serial_batches: False
    temp_path: ./temp
    tf_log: False
    use_dropout: False
    use_encoded_image: False
    use_mask: True
    verbose: False
    video_path: pt.mp4
    which_epoch: latest
    -------------- End ----------------
    Traceback (most recent call last):
      File "test_video_swapmulti.py", line 58, in <module>
        app = Face_detect_crop(name='antelope', root='./insightface_func/models')
      File "C:\Users\foldd\Desktop\SimSwap\insightface_func\face_detect_crop_multi.py", line 40, in __init__
        model = model_zoo.get_model(onnx_file)
      File "C:\ProgramData\Anaconda3\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
        model = router.get_model()
      File "C:\ProgramData\Anaconda3\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
        session = onnxruntime.InferenceSession(self.onnx_file, None)
      File "C:\ProgramData\Anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "C:\ProgramData\Anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 364, in _create_inference_session
        "onnxruntime.InferenceSession(..., providers={}, ...)".format(available_providers))
    ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

skintflickz commented 2 years ago

Is there a question in that post somewhere?

santosadrian commented 2 years ago

Sorry, I thought this wasn't a forum for posting questions. But since you ask, sure: how can I fix this error?

Thank you.

skintflickz commented 2 years ago

Ah, OK. Here's how I fixed it (I use a conda environment, by the way): uninstall `onnxruntime-gpu`, then `pip install onnxruntime-gpu==1.9.0` (exact commands below).

I think it has something to do with the latest version, 1.12.0, and the CUDA version, but I haven't been able to work out exactly why.

Somehow that sorted the problem you have above.

However, it made very little difference to processing speed on my K80: it went from 1.2 it/s to 1.48 it/s.
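For anyone following along, the downgrade boils down to these two commands inside the activated `simswap` conda environment (assuming `onnxruntime-gpu` was installed with pip, as in the SimSwap setup instructions):

    pip uninstall -y onnxruntime-gpu
    pip install onnxruntime-gpu==1.9.0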

mossan0101 commented 2 years ago

Try this one: https://github.com/mike9251/simswap-inference-pytorch

It is faster than the official repository and supports the RTX 3000 series for inference.

k128 commented 2 years ago

I was able to fix this error by changing `site-packages\insightface\model_zoo\model_zoo.py:23`

from:

    session = onnxruntime.InferenceSession(self.onnx_file, None)

to:

    session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
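A slightly more defensive variant (my own sketch, not code from insightface) first asks ONNX Runtime which providers this particular build actually ships, so the same edit works on both GPU and CPU-only installs:

    import onnxruntime

    def make_session(onnx_file):
        # Providers compiled into this onnxruntime build, in ORT's preferred order.
        available = onnxruntime.get_available_providers()
        # Prefer CUDA when present; always keep the CPU fallback.
        preferred = [p for p in ('CUDAExecutionProvider', 'CPUExecutionProvider') if p in available]
        return onnxruntime.InferenceSession(onnx_file, providers=preferred)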

white0rchardpUnK commented 2 years ago

> Ah, OK. Here's how I fixed it (I use a conda environment, by the way): uninstall `onnxruntime-gpu`, then `pip install onnxruntime-gpu==1.9.0`.
>
> I think it has something to do with the latest version, 1.12.0, and the CUDA version, but I haven't been able to work out exactly why.
>
> Somehow that sorted the problem you have above.
>
> However, it made very little difference to processing speed on my K80: it went from 1.2 it/s to 1.48 it/s.

I have the same problem with my 10-series graphics card. The old version does need to be installed, because the training code has not been updated for the newer onnxruntime releases.
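A quick way to check that the pinned version actually took effect (my own snippet, nothing SimSwap-specific):

    import onnxruntime

    print(onnxruntime.__version__)                # expect 1.9.0 after the downgrade
    print(onnxruntime.get_available_providers())  # a working GPU build lists 'CUDAExecutionProvider'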

woctezuma commented 1 year ago

> I was able to fix this error by changing `site-packages\insightface\model_zoo\model_zoo.py:23` from `session = onnxruntime.InferenceSession(self.onnx_file, None)` to `session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])`.

I can confirm that editing the following file works.

File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 23

Before:

    def get_model(self):
        session = onnxruntime.InferenceSession(self.onnx_file, None)

After:

    def get_model(self):
        session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['AzureExecutionProvider', 'CPUExecutionProvider'])
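If you are on a hosted runtime and want to script this instead of editing the file by hand, a one-liner along these lines applies the same change (the path is the one from my traceback above; swap in whatever provider list your ORT build reports):

    sed -i "s/InferenceSession(self.onnx_file, None)/InferenceSession(self.onnx_file, None, providers=['AzureExecutionProvider', 'CPUExecutionProvider'])/" \
        /usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py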


TransAmMan commented 1 year ago

> > I was able to fix this error by changing `site-packages\insightface\model_zoo\model_zoo.py:23` from `session = onnxruntime.InferenceSession(self.onnx_file, None)` to `session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])`.
>
> I can confirm that editing the following file works.
>
> File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 23
>
> Before:
>
>     def get_model(self):
>         session = onnxruntime.InferenceSession(self.onnx_file, None)
>
> After:
>
>     def get_model(self):
>         session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['AzureExecutionProvider', 'CPUExecutionProvider'])

Can you help me implement this fix for a hosted GPU runtime?