Yes, it's probably due to an optimization that isn't working. Version 1.2.1 should fix it.
Oh, all good then. Thanks for the extension :)
@glucauze
I have an issue after installing the extension. I tried updating and reinstalling, but the problem is still the same.
I'm using the colab script from https://github.com/TheLastBen/fast-stable-diffusion
** Error loading script: faceswaplab.py
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 319, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "
This shouldn't happen; I think there was an error during installation. You can have a look at the general note on this subject: #36
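For anyone hitting the same thing: "Error loading script" just means an exception escaped while webui was importing the extension file, so a broken install (e.g. a dependency that failed to download) surfaces here. A rough sketch of the loading pattern the truncated traceback goes through, using only standard importlib machinery (the exact webui source may differ):

```python
import importlib.util
import os


def load_module(path: str):
    # Build a module spec directly from the script file on disk...
    module_spec = importlib.util.spec_from_file_location(os.path.basename(path), path)
    module = importlib.util.module_from_spec(module_spec)
    # ...and execute it. Any exception raised by the extension's top-level
    # code (missing package, half-finished install, etc.) is re-raised here,
    # which is what "Error loading script" reports.
    module_spec.loader.exec_module(module)
    return module
```

So the fix is usually to repair the installation itself rather than the script; see the note in #36.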
Describe the bug
I get this error in the webui on colab and it doesn't work:
85% 17/20 [00:05<00:00, 3.98it/s]
90% 18/20 [00:06<00:00, 4.11it/s]
95% 19/20 [00:06<00:00, 3.96it/s]
100% 20/20 [00:06<00:00, 3.01it/s]
2023-08-06 08:47:15,216 - FaceSwapLab - INFO - Try to use model : /content/sdw/models/faceswaplab/inswapper_128.onnx
2023-08-06 08:47:15,272 - FaceSwapLab - INFO - Load analysis model, will take some time. (> 30s)
Loading analysis model (first time is slow): 100% 1/1 [00:08<00:00, 8.49s/model]
2023-08-06 08:47:23,760 - FaceSwapLab - INFO -
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /content/sdw/models/faceswaplab/analysers/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /content/sdw/models/faceswaplab/analysers/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /content/sdw/models/faceswaplab/analysers/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /content/sdw/models/faceswaplab/analysers/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /content/sdw/models/faceswaplab/analysers/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
2023-08-06 08:47:23,761 - FaceSwapLab - ERROR - Failed to swap face in postprocess method : This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
Traceback (most recent call last):
  File "/content/sdw/extensions/sd-webui-faceswaplab/scripts/faceswaplab.py", line 178, in postprocess
    swapped_images = swapper.process_images_units(
  File "/content/sdw/extensions/sd-webui-faceswaplab/scripts/faceswaplab_swapping/swapper.py", line 777, in process_images_units
    swapped = process_image_unit(model, units[0], image, info, force_blend)
  File "/content/sdw/extensions/sd-webui-faceswaplab/scripts/faceswaplab_swapping/swapper.py", line 650, in process_image_unit
    faces = get_faces(pil_to_cv2(image))
  File "/content/sdw/extensions/sd-webui-faceswaplab/scripts/faceswaplab_swapping/swapper.py", line 372, in get_faces
    face_analyser = copy.deepcopy(getAnalysisModel())
  File "/usr/lib/python3.10/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.10/copy.py", line 271, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.10/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.10/copy.py", line 231, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.10/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.10/copy.py", line 231, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.10/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.10/copy.py", line 271, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.10/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.10/copy.py", line 231, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.10/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.10/copy.py", line 273, in _reconstruct
    y.__setstate__(state)
  File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 33, in __setstate__
    self.__init__(model_path)
  File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 375, in _create_inference_session
    raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
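For context on why a deepcopy triggers this: per the traceback, copying the analysis model re-creates its ONNX sessions through insightface's __setstate__, which calls __init__(model_path) without forwarding a providers list, and onnxruntime 1.9+ refuses to create a session without one. Below is a minimal sketch of the explicit call the error message itself suggests; the model path is copied from the log above, and the CPU-only provider list is an assumption (swap in CUDAExecutionProvider on a GPU build).

```python
# Minimal sketch: create an ONNX Runtime session with an explicit providers
# list, as required since onnxruntime 1.9.
import onnxruntime

# See which providers this ORT build actually offers, e.g.
# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
print(onnxruntime.get_available_providers())

session = onnxruntime.InferenceSession(
    "/content/sdw/models/faceswaplab/inswapper_128.onnx",  # path from the log above
    providers=["CPUExecutionProvider"],  # assumption: CPU-only; explicit list avoids the ValueError
)
```

Downgrading onnxruntime below 1.9 sidesteps the check, but an explicit providers argument on the deepcopy path is the fix the ORT error is actually asking for.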