mike9251 / simswap-inference-pytorch

Unofficial PyTorch implementation (inference only) of SimSwap: An Efficient Framework For High Fidelity Face Swapping

OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized. #17

Open. SerZhyAle opened this issue 1 year ago

SerZhyAle commented 1 year ago

(myenv) c:_N\simswap-inference-pytorch>streamlit run app_web.py

You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://192.168.1.70:8501

OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/
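For reference, the unsafe workaround mentioned in the hint amounts to setting KMP_DUPLICATE_LIB_OK before anything loads the OpenMP runtime. A minimal sketch is below (the cleaner fix is to make sure the conda environment ships only one copy of libiomp5md.dll):

# Sketch of the workaround from the OMP hint above.
# Intel calls this unsafe/unsupported, so treat it as a last resort.
import os

os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"  # must be set before torch/numpy are imported

import torch  # imported only after the variable is set on purpose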

SerZhyAle commented 1 year ago

streamlit run app_web.py

You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://192.168.1.70:8501

A new version of Streamlit is available.

See what's new at https://discuss.streamlit.io/c/announcements

Enter the following command to upgrade:
$ pip install streamlit --upgrade

C:_N\Anaconda3\envs\myenv\lib\site-packages\torch\functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\TensorShape.cpp:3484.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
2023-04-23 02:04:04.123 Uncaught app exception
Traceback (most recent call last):
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 589, in get_or_create_cached_value
    return_value = _read_from_cache(
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 349, in _read_from_cache
    raise e
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 334, in _read_from_cache
    return _read_from_mem_cache(
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 252, in _read_from_mem_cache
    raise CacheKeyNotFoundError("Key not found in mem cache")
streamlit.runtime.legacy_caching.caching.CacheKeyNotFoundError: Key not found in mem cache

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 563, in _run_script
    exec(code, module.__dict__)
  File "C:_N\simswap-inference-pytorch-main\app_web.py", line 163, in <module>
    model = load_model(config)
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 623, in wrapped_func
    return get_or_create_cached_value()
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 607, in get_or_create_cached_value
    return_value = non_optional_func(*args, **kwargs)
  File "C:_N\simswap-inference-pytorch-main\app_web.py", line 115, in load_model
    return SimSwap(
  File "C:_n\simswap-inference-pytorch-main\src\simswap.py", line 42, in __init__
    self.set_parameters(config)
  File "C:_n\simswap-inference-pytorch-main\src\simswap.py", line 118, in set_parameters
    self.set_smooth_mask_kernel_size(config.smooth_mask_kernel_size)
  File "C:_n\simswap-inference-pytorch-main\src\simswap.py", line 169, in set_smooth_mask_kernel_size
    self.re_initialize_soft_mask()
  File "C:_n\simswap-inference-pytorch-main\src\simswap.py", line 160, in re_initialize_soft_mask
    self.smooth_mask = SoftErosion(kernel_size=self.smooth_mask_kernel_size,
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\torch\nn\modules\module.py", line 844, in _apply
    self._buffers[key] = fn(buf)
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
2023-04-23 02:04:08.845 Uncaught app exception
Traceback (most recent call last):
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 589, in get_or_create_cached_value
    return_value = _read_from_cache(
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 349, in _read_from_cache
    raise e
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 334, in _read_from_cache
    return _read_from_mem_cache(
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 252, in _read_from_mem_cache
    raise CacheKeyNotFoundError("Key not found in mem cache")
streamlit.runtime.legacy_caching.caching.CacheKeyNotFoundError: Key not found in mem cache

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 563, in _run_script
    exec(code, module.__dict__)
  File "C:_N\simswap-inference-pytorch-main\app_web.py", line 163, in <module>
    model = load_model(config)
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 623, in wrapped_func
    return get_or_create_cached_value()
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 607, in get_or_create_cached_value
    return_value = non_optional_func(*args, **kwargs)
  File "C:_N\simswap-inference-pytorch-main\app_web.py", line 115, in load_model
    return SimSwap(
  File "C:_n\simswap-inference-pytorch-main\src\simswap.py", line 42, in __init__
    self.set_parameters(config)
  File "C:_n\simswap-inference-pytorch-main\src\simswap.py", line 118, in set_parameters
    self.set_smooth_mask_kernel_size(config.smooth_mask_kernel_size)
  File "C:_n\simswap-inference-pytorch-main\src\simswap.py", line 169, in set_smooth_mask_kernel_size
    self.re_initialize_soft_mask()
  File "C:_n\simswap-inference-pytorch-main\src\simswap.py", line 160, in re_initialize_soft_mask
    self.smooth_mask = SoftErosion(kernel_size=self.smooth_mask_kernel_size,
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\torch\nn\modules\module.py", line 844, in _apply
    self._buffers[key] = fn(buf)
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "C:_N\Anaconda3\envs\myenv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
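Both uncaught app exceptions bottom out in the same place: the installed PyTorch is a CPU-only build, so moving the SoftErosion buffers to a CUDA device raises the assertion. A quick diagnostic sketch (the device fallback shown here is an illustration, not code from this repo):

import torch

# A CPU-only wheel typically reports a version suffix like "+cpu" and no CUDA support.
print(torch.__version__)
print(torch.cuda.is_available())  # False here, which is why .to("cuda") fails

# Illustrative fallback: request CUDA only when it is actually available,
# otherwise run inference on the CPU instead of hard-coding "cuda".
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)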