Whenever I try to generate an image I get this error. It says to use GetDeviceRemovedReason to determine the appropriate action, but I can't type that into the CMD window. What should I do here?
`2023-01-31 17:21:35.2587704 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2023-01-31 17:21:39.6777490 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2023-01-31 17:21:41.6077952 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2023-01-31 17:21:42.3875262 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
0%| | 0/30 [00:00<?, ?it/s]2023-01-31 17:22:39.6907470 [E:onnxruntime:, sequential_executor.cc:369 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running MemcpyToHost node. Name:'Memcpy_token_465' Status Message: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1978)\onnxruntime_pybind11_state.pyd!00007FFCCBA2C16F: (caller: 00007FFCCC13DDAF) Exception(3) tid(3334) 887A0006 The GPU will not respond to more commands, most likely because of an invalid command passed by the calling application.
0%| | 0/30 [00:10<?, ?it/s]
Traceback (most recent call last):
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\routes.py", line 292, in run_predict
output = await app.blocks.process_api(
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\blocks.py", line 1007, in process_api
result = await self.call_function(fn_index, inputs, iterator, request)
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\blocks.py", line 848, in call_function
prediction = await anyio.to_thread.run_sync(
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\amd_webui.py", line 47, in txt2img
image = pipe(prompt,
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\src\diffusers\src\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 274, in call
noise_pred = self.unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=text_embeddings)
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\src\diffusers\src\diffusers\onnx_utils.py", line 61, in call
return self.model.run(None, inputs)
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running MemcpyToHost node. Name:'Memcpy_token_465' Status Message: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1978)\onnxruntime_pybind11_state.pyd!00007FFCCBA2C16F: (caller: 00007FFCCC13DDAF) Exception(3) tid(3334) 887A0006 The GPU will not respond to more commands, most likely because of an invalid command passed by the calling application.
2023-01-31 17:22:54.1506880 [E:onnxruntime:, sequential_executor.cc:369 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running MemcpyFromHost node. Name:'Memcpy_token_0' Status Message: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1978)\onnxruntime_pybind11_state.pyd!00007FFCCBA2C16F: (caller: 00007FFCCC13DDAF) Exception(6) tid(3334) 887A0005 The GPU device instance has been suspended. Use GetDeviceRemovedReason to determine the appropriate action.
Traceback (most recent call last):
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\routes.py", line 292, in run_predict
output = await app.blocks.process_api(
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\blocks.py", line 1007, in process_api
result = await self.call_function(fn_index, inputs, iterator, request)
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\blocks.py", line 848, in call_function
prediction = await anyio.to_thread.run_sync(
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\amd_webui.py", line 47, in txt2img
image = pipe(prompt,
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\src\diffusers\src\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 235, in call
text_embeddings = self._encode_prompt(
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\src\diffusers\src\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 150, in _encode_prompt
text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int32))[0]
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\src\diffusers\src\diffusers\onnx_utils.py", line 61, in call
return self.model.run(None, inputs)
File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running MemcpyFromHost node. Name:'Memcpy_token_0' Status Message: D:\a_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1978)\onnxruntime_pybind11_state.pyd!00007FFCCBA2C16F: (caller: 00007FFCCC13DDAF) Exception(6) tid(3334) 887A0005 The GPU device instance has been suspended. Use GetDeviceRemovedReason to determine the appropriate action.`