Open · shiverwang76 opened this issue 4 months ago
See #6
Thank you, I can now reach the Gradio UI, but when I run step 1 I still get the following error:
```
Traceback (most recent call last):
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\gradio\queueing.py", line 528, in process_events
    response = await route_utils.call_process_api(
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\gradio\route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\gradio\blocks.py", line 1908, in process_api
    result = await self.call_function(
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\gradio\blocks.py", line 1485, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\gradio\utils.py", line 808, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\paints-undo\gradio_app.py", line 115, in interrogator_process
    return wd14tagger.default_interrogator(x)
  File "D:\paints-undo\wd14tagger.py", line 48, in default_interrogator
    model = InferenceSession(model_onnx_filename, providers=['CPUExecutionProvider'])
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 472, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from ./wd-v1-4-moat-tagger-v2.onnx failed:Protobuf parsing failed.
```
Strangely, step 2 and step 3 work fine.
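For what it's worth, an `InvalidProtobuf` error at `InferenceSession(...)` usually means the `.onnx` file on disk is truncated or corrupted, for example by an interrupted download. Here is a minimal sketch for checking this, assuming the model sits in the repo root as the traceback suggests:

```python
import os
import onnx  # pip install onnx; used only to validate the file

MODEL_PATH = "./wd-v1-4-moat-tagger-v2.onnx"

# A partial download often leaves a file of a few KB (sometimes a saved
# HTML error page) where the multi-hundred-MB model should be.
print(f"size: {os.path.getsize(MODEL_PATH) / 1e6:.1f} MB")

try:
    # check_model parses the protobuf and validates it against the ONNX schema.
    onnx.checker.check_model(MODEL_PATH)
    print("model parses cleanly")
except Exception as e:
    print(f"corrupt file, delete it and fetch it again: {e}")
```

If the check fails, deleting `wd-v1-4-moat-tagger-v2.onnx` and rerunning step 1 should let the app fetch a fresh copy.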
When running `python gradio_app.py`, I encountered this error message:
```
Traceback (most recent call last):
  File "D:\Paints-UNDO\gradio_app.py", line 15, in <module>
    import memory_management
  File "D:\Paints-UNDO\memory_management.py", line 9, in <module>
    torch.zeros((1, 1)).to(gpu, torch.float32)
  File "C:\Users\username\anaconda3\envs\paints_undo\lib\site-packages\torch\cuda\__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
(paints_undo) PS D:\Paints-UNDO> import torch
```
How do I solve this?
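In case it helps to diagnose: that assertion comes from `torch.cuda._lazy_init` and fires when the installed wheel is a CPU-only build of PyTorch. A quick check, assuming the `paints_undo` conda env is active (the cu121 index URL below is just one example of a CUDA wheel source; pick the one matching your driver):

```python
import torch

# A CPU-only wheel typically reports a version like "2.3.0+cpu";
# a CUDA build reports e.g. "2.3.0+cu121".
print(torch.__version__)
print(torch.cuda.is_available())  # must be True for memory_management.py

# If this prints False, reinstall torch from a CUDA index inside the env, e.g.:
#   pip uninstall torch torchvision torchaudio
#   pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```

Also note that `import torch` only works inside a Python session, not at the PowerShell prompt; run `python` first and type it at the `>>>` prompt.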