Closed wensleyoliv closed 6 months ago
Looking at your commit, I think the v2 swin tagger is now broken.
Have you updated the dependencies from requirements.txt? Post the log with the error, since I don’t have a similar problem.
I did, that's strange
Oh, I didn't see the part where you said "post the log with the error". Well, there's no error; the tagger process is simply killed at the moment of tagging when using swinv2. I tried redownloading, but it's still the same.
Looks like I found another dependent package, the wrong version of which was causing the crash. Try reinstalling the dependencies from requirements.txt again with a new commit.
But `onnxruntime` is the CPU-only runtime. It does work, but it defaults to CPU, and it's not needed alongside `onnxruntime-gpu`.
Initially, I thought so too, but an experiment on another computer showed the opposite. The following packages were installed on the computer:
```
onnx==1.15.0
onnxruntime==1.15.0
onnxruntime-gpu==1.17.1
```
After trying to start tag generation, interrogator_rpc simply exited without any errors. Then I ran `pip install --upgrade onnxruntime`, which updated onnxruntime to version 1.17.1, and tag generation worked without any problems.
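To confirm which of these wheels actually ended up in an environment, a quick stdlib check can help (the helper function below is my own sketch, not code from this project):

```python
from importlib import metadata

def installed_versions(*packages):
    """Return {distribution name: version or None} for the given packages.

    Useful for spotting the situation described above, where both the
    onnxruntime and onnxruntime-gpu wheels are installed at once.
    """
    out = {}
    for name in packages:
        try:
            out[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            out[name] = None  # not installed in this environment
    return out

print(installed_versions("onnx", "onnxruntime", "onnxruntime-gpu"))
```

If both `onnxruntime` and `onnxruntime-gpu` report a version, that is the conflicting setup from this comment.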
It works, but installing `onnxruntime` will override the providers. I used this piece of code:

```python
import onnxruntime as ort
print(ort.get_available_providers())
```

The output was `["AzureExecutionProvider", "CPUExecutionProvider"]`. After uninstalling `onnxruntime`, I ran the program again and the output was `["CUDAExecutionProvider", "CPUExecutionProvider"]`.
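The practical consequence of those two provider lists can be sketched with a small selection helper (my own illustration, not code from this repository): whichever wheel registered its providers decides whether CUDA can ever be picked.

```python
def pick_provider(available):
    """Pick the first preferred execution provider that is available.

    Mirrors the two outputs above: when the CPU wheel shadows the GPU
    wheel, CUDAExecutionProvider never appears in `available`, so the
    session silently falls back to CPU.
    """
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    for provider in preferred:
        if provider in available:
            return provider
    return "CPUExecutionProvider"  # last-resort default

# With both wheels installed, only the CPU wheel's providers are visible:
print(pick_provider(["AzureExecutionProvider", "CPUExecutionProvider"]))  # CPUExecutionProvider
# After uninstalling the CPU wheel:
print(pick_provider(["CUDAExecutionProvider", "CPUExecutionProvider"]))   # CUDAExecutionProvider
```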
Yes, unfortunately this is a problem that I don't know how to solve. The onnxruntime update was needed to support models with ONNX ML opset version 4. onnxruntime 1.17.1 works with both version 3 and version 4, but onnxruntime-gpu 1.17.1 crashes when trying to use models with version 3. Perhaps this is a bug in onnxruntime-gpu itself. I couldn't find any WD Tagger implementations that support the new models; if you know of repositories with working code, please post links to them.
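One way to contain the crash described here would be a try-GPU-then-CPU fallback around model loading. The sketch below uses hypothetical loader callables as stand-ins; it is not the project's actual API, just the shape of the workaround:

```python
def load_with_fallback(model_path, load_gpu, load_cpu):
    """Try the GPU runtime first; fall back to CPU if loading fails.

    `load_gpu` / `load_cpu` are placeholder callables standing in for
    session creation with onnxruntime-gpu and onnxruntime respectively.
    """
    try:
        return load_gpu(model_path), "gpu"
    except Exception:
        # e.g. onnxruntime-gpu 1.17.1 failing on an ML opset 3 model
        return load_cpu(model_path), "cpu"

# Stand-in "GPU" loader that rejects v2 models, imitating the opset-3 crash:
def fake_gpu(path):
    if "v2" in path:
        raise RuntimeError("unsupported ML opset")
    return "gpu-session"

session, backend = load_with_fallback(
    "wd-swinv2-tagger-v2.onnx", fake_gpu, lambda p: "cpu-session"
)
print(backend)  # cpu
```

This only papers over the bug, of course; it does not fix the underlying onnxruntime-gpu behavior.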
In my case `onnxruntime-gpu` only crashes when using the swin v2 tagger; the others seem to work fine.
I would do it another way, but I guess that will work.
It seems to me that this is a bug in onnxruntime-gpu, which is why I made this exception. I hope this will be fixed in future releases and I will remove this exception.
Yeah, I understand that, and it's a good idea. What I was thinking is, since v2 taggers can be run using onnx or keras, you could use keras, but it isn't worth it for just one tagger. I too hope that a future release of `onnxruntime-gpu` fixes it!
SmilingWolf just released new interrogator models with updated datasets. Can you add them to the BooruDatasetTagManager interrogator?
https://huggingface.co/SmilingWolf/wd-vit-tagger-v3 https://huggingface.co/SmilingWolf/wd-swinv2-tagger-v3 https://huggingface.co/SmilingWolf/wd-convnext-tagger-v3