xiaolaa2 / stable-diffusion-webui-wd14-tagger

Modified version that can load wd-tagger models from a local directory (you can load the wd-tagger model locally)

[bug] Interrogation fails: every interrogate attempt raises an error #4

Open worisur opened 2 months ago

worisur commented 2 months ago

Environment: version: v1.10.1, python: 3.10.14, torch: 2.1.2+cu121, xformers: N/A, gradio: 3.41.2, checkpoint: 7c819b6d13

Error log:

```
Error completing request
Arguments: (<PIL.Image.Image image mode=RGB size=4160x6240 at 0x71518E128F10>, '', False, '', '[name].[output_extension]', 'ignore', False, False, 'wd14-convnext-v2', 0.35, '', '', False, False, True, '00, (o)(o), ++, +-, .., , <|><|>, ==, >_<, 3_3, 6_9, >o, @@, ^_^, o_o, u_u, xx, ||, ||_||', False, False) {}
Traceback (most recent call last):
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/modules/call_queue.py", line 74, in f
    res = list(func(*args, **kwargs))
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/modules/call_queue.py", line 53, in f
    res = func(*args, **kwargs)
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/tagger/ui.py", line 70, in on_interrogate
    ratings, tags = interrogator.interrogate(image)
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/tagger/interrogator.py", line 283, in interrogate
    self.load()
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/tagger/interrogator.py", line 268, in load
    self.model = InferenceSession(str(model_path), providers=providers)
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 480, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/models/wd-tagger/wd-v1-4-convnext-tagger-v2/model.onnx failed:Protobuf parsing failed.
```


```
Traceback (most recent call last):
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1434, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1297, in postprocess_data
    self.validate_outputs(fn_index, predictions)  # type: ignore
  File "/home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1272, in validate_outputs
    raise ValueError(
ValueError: An event handler (on_interrogate) didn't receive enough output values (needed: 4, received: 3).
Wanted outputs:
    [textbox, label, label, html]
Received outputs:
    [None, "", "InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /home/1.worisur/1.tools/1.ai/01.sd/stable-diffusion-webui/models/wd-tagger/wd-v1-4-convnext-tagger-v2/model.onnx failed:Protobuf parsing failed. Time taken: 0.0 sec. A: 2.03 GB, R: 2.03 GB, Sys: 4.7/15.5977 GB (30.2%)"]
```
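Both tracebacks come down to the same INVALID_PROTOBUF failure: onnxruntime cannot parse `models/wd-tagger/wd-v1-4-convnext-tagger-v2/model.onnx`, which usually means the file on disk is not a valid ONNX model at all, typically a truncated download or a git-lfs pointer file left behind by a `git clone` without git-lfs installed. A minimal stdlib sketch of that check (the helper name `diagnose_onnx_file` and the size threshold are illustrative assumptions, not part of the extension):

```python
import os

# A git-lfs pointer file is ~130 bytes of text starting with this line.
LFS_MAGIC = b"version https://git-lfs.github.com/spec/v1"

def diagnose_onnx_file(path):
    """Return a short verdict string for a model.onnx that fails to parse.

    Heuristics only: an LFS pointer file or a suspiciously small file
    are the usual causes of an INVALID_PROTOBUF load error.
    """
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        head = f.read(len(LFS_MAGIC))
    if head == LFS_MAGIC:
        return "git-lfs pointer, not the real model -- re-download with git-lfs"
    if size < 1_000_000:  # assumption: real wd-tagger ONNX models are far larger
        return f"file is only {size} bytes -- download likely truncated"
    return "size looks plausible -- file may still be corrupted; re-download it"
```

If it reports a pointer or a truncated file, re-downloading the model with git-lfs installed and replacing `model.onnx` should clear the error.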

xiaolaa2 commented 1 month ago

Mate, I'm no longer working on this project, so you may need to try someone else's fork. That said, looking at your error, my personal guess is that it's a dependency version problem.
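The dependency-version guess can be checked without touching the install by asking the venv's interpreter what it actually has. A small stdlib sketch (the helper name `report_versions` and the package list are assumptions for illustration):

```python
from importlib.metadata import PackageNotFoundError, version

def report_versions(packages):
    """Map each distribution name to its installed version, or None if absent."""
    found = {}
    for pkg in packages:
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            found[pkg] = None
    return found

if __name__ == "__main__":
    # Distributions the two tracebacks involve.
    for pkg, ver in report_versions(
        ["onnxruntime", "onnxruntime-gpu", "protobuf", "gradio"]
    ).items():
        print(pkg, ver if ver else "not installed")
```

Saved as, say, `check_versions.py` and run with `./venv/bin/python check_versions.py` from the webui directory, mismatched or missing onnxruntime/protobuf builds would support the version theory, while sane versions point back at a corrupted model file.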