Closed masiv1001 closed 7 months ago
That is... very strange. I checked: the file is still hosted by GitHub in that repository, and deleting my local models and redownloading them with pcleaner worked as expected. Since the error says max retries were exceeded, it did attempt to download the file but couldn't establish a connection, perhaps due to an internet issue on your end, or GitHub experiencing a brief outage?
Was the error a one-off, or does it persist? I'm not too sure what else the cause could be.
If you have dependency issues with pip, I can recommend installing pipx via pip and using that to install pcleaner. Pipx places everything into its own virtualenv automatically, so no dependency issues. It takes more space, yes, but at least you could then get CUDA, if you happen to have an NVIDIA GPU. (The binary doesn't contain CUDA support, as that would add 5GB by itself, too large for GitHub to host.)
Hearing that the binary at least works on Fedora is great news in and of itself, since I've been meaning to build a flatpak for this app using said binary. It'd probably provide an even better user experience for you that way, were it not for this issue.
The error keeps happening; I will try pipx instead of pip. A flatpak version would probably be the best packaging option. I don't know the details, but being cross-platform and not dependent on each distro's packages sounds like a good deal. It would open the app up to the wider public: being in a repo like Flathub would list it on all the distros that use it as a source.
Yup, I've already gotten my previous package on Flathub, so I'm familiar with the process. I just need to find the time, since it's a rather long, annoying process (though the second time should go smoother).
You could also manually download the models and install them to the paths listed by pcleaner. The links pcleaner provides should take you to the right place. The first one is the comictextdetector.pt.onnx file from that repo (the .onnx is the CPU variant), and the second one is the whole repo itself.
For the first one, you will need to place comictextdetector.pt.onnx in the .cache/pcleaner/model folder and also update the entry in the config file at .config/pcleaner/pcleanerrc. The entry is:
default_cv2_model_path = /home/<username>/.cache/pcleaner/model/comictextdetector.pt.onnx
(You can also use some other path, just gotta set it correctly in the config.)
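Sketched as shell commands (the `touch` is only a stand-in for the model file you would actually download from the repo; the destination paths match those above):

```shell
# Stand-in for the downloaded model file -- in a real setup you would
# already have comictextdetector.pt.onnx from the GitHub repo.
touch comictextdetector.pt.onnx

# Place it where pcleaner expects it.
mkdir -p "$HOME/.cache/pcleaner/model"
mv comictextdetector.pt.onnx "$HOME/.cache/pcleaner/model/"

# The matching entry in ~/.config/pcleaner/pcleanerrc would then be:
#   default_cv2_model_path = /home/<username>/.cache/pcleaner/model/comictextdetector.pt.onnx
```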
For the second model, which is a huggingface model, you can look at https://huggingface.co/docs/hub/models-downloading, though I have never tried it myself. It seems a lot more involved, cloning a whole repository and stuff, not just a simple file. Hopefully that one isn't an issue.
It's been a while, but I finally have that flatpak release I promised: https://flathub.org/apps/io.github.voxelcubes.panelcleaner
Let me know if that fixes the issue. Hopefully it's just some odd incompatibility that flatpak would now take care of.
I'll let you know asap, thanks for your efforts in packaging in flatpak! It really adds to user friendliness.
Thanks! Yeah, Flatpak most definitely will make it a lot more comfy as a desktop app. Pip just ain't meant for that. Since it's my second time making one, it went pretty smoothly, only taking 2 days, rather than a week like the first time.
Hi! I've got some updates on panel-cleaner: the app manages to open, but it fails when trying to detect text. These are the logs:
Output generation failed:
<class 'RuntimeError'>: Traceback (most recent call last):
  File "/app/lib/python3.11/site-packages/pcleaner/gui/worker_thread.py", line 141, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/lib/python3.11/site-packages/pcleaner/gui/image_details_driver.py", line 468, in generate_output
    prc.generate_output(
  File "/app/lib/python3.11/site-packages/pcleaner/gui/processing.py", line 175, in generate_output
    ctm.model2annotations_gui(
  File "/app/lib/python3.11/site-packages/pcleaner/gui/ctd_interface_gui.py", line 120, in model2annotations_gui
    process_image(
  File "/app/lib/python3.11/site-packages/pcleaner/ctd_interface.py", line 165, in process_image
    mask, mask_refined, blk_list = model(
                                   ^^^^^^
  File "/app/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/lib/python3.11/site-packages/pcleaner/comic_text_detector/inference.py", line 179, in __call__
    blks = postprocess_yolo(blks, self.conf_thresh, self.nms_thresh, resize_ratio)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/lib/python3.11/site-packages/pcleaner/comic_text_detector/inference.py", line 115, in postprocess_yolo
    det = non_max_suppression(det, conf_thresh, nms_thresh)[0]
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/lib/python3.11/site-packages/pcleaner/comic_text_detector/utils/yolov5_utils.py", line 263, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/lib/python3.11/site-packages/torchvision/ops/boxes.py", line 40, in nms
    _assert_has_ops()
  File "/app/lib/python3.11/site-packages/torchvision/extension.py", line 46, in _assert_has_ops
    raise RuntimeError(
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
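For context: `torchvision.ops.nms` is the compiled C++ op that failed to load here. A pure-Python sketch of the greedy non-maximum suppression it computes (illustrative only; this is not pcleaner's or torchvision's actual code) would be:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it
    by more than iou_thresh, repeat. Returns kept indices, best score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep
```

The real op does the same filtering as a vectorized C++/CUDA kernel, which is why a torch/torchvision mismatch surfaces exactly at this call.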
Is that with the flatpak? Seems more like you're using pip and somehow ended up with incompatible torch and torchvision packages. What does pip list say in that case?
I'm using the flatpak version. This is the output of pip list:
aiofiles 23.2.1
aiohttp 3.9.1
aiosignal 1.3.1
anyio 4.2.0
appdirs 1.4.4
argcomplete 2.0.0
attrs 23.2.0
Babel 2.12.1
Beaker 1.12.1
beautifulsoup4 4.12.2
blivet 3.8.2
blivet-gui 2.4.2
Brlapi 0.8.5
Brotli 1.1.0
bs4 0.0.1
certifi 2023.5.7
cffi 1.15.1
chardet 5.2.0
charset-normalizer 3.2.0
click 8.1.3
contourpy 1.2.0
cryptography 41.0.3
cupshelpers 1.0
cycler 0.11.0
dasbus 1.7
dbus-python 1.3.2
decorator 5.1.1
distro 1.8.0
dlib 19.24.2
dnf 4.18.2
dnspython 2.3.0
evdev 1.6.1
fedora-third-party 0.10
file-magic 0.4.0
fonttools 4.42.0
fros 1.1
frozenlist 1.4.1
fs 2.4.16
gpg 1.20.0
gssapi 1.7.3
h11 0.14.0
h2 4.1.0
hpack 4.0.0
httpcore 1.0.2
httpx 0.26.0
humanize 3.13.1
hyperframe 6.0.1
idna 3.4
kiwisolver 1.4.5
koji 1.33.1
langtable 0.0.64
libcomps 0.1.20
libdnf 0.72.0
libevdev 0.11
libvirt-python 9.7.0
lutris 0.5.14
lxml 4.9.3
Mako 1.2.3
mangadex-downloader 2.10.3
MarkupSafe 2.1.3
matplotlib 3.8.2
meson 1.2.3
moddb 0.8.1
multidict 6.0.4
munkres 1.1.2
mutagen 1.46.0
nftables 0.1
niconico-dl 2.2.8
numpy 1.24.4
olefile 0.46
opencv 4.8.1
packaging 23.1
Paste 3.5.3
pathvalidate 3.2.0
pexpect 4.8.0
pid 2.2.3
Pillow 9.5.0
pip 23.2.1
ply 3.11
pooch 1.5.2
productmd 1.38
progressbar2 3.53.2
protobuf 3.19.6
ptyprocess 0.7.0
pwquality 1.4.5
pycairo 1.25.1
pycparser 2.20
pycrypto 2.6.1
pycryptodomex 3.19.0
pycups 2.0.1
pycurl 7.45.2
pyenchant 3.2.2
pygit2 1.13.3
PyGObject 3.46.0
PyJWT 2.8.0
pykickstart 3.48
pyOpenSSL 23.2.0
pyparsing 3.0.9
pyparted 3.13.0
pypresence 4.3.0
pyrate-limiter 3.1.0
PySocks 1.7.1
python-augeas 1.1.0
python-dateutil 2.8.2
python-meh 0.51
python-utils 3.3.3
pytz 2023.3.post1
pyudev 0.24.1
pyxdg 0.27
PyYAML 6.0.1
regex 2023.10.3
requests 2.31.0
requests-doh 0.3.1
requests-file 1.5.1
requests-ftp 0.3.1
requests-gssapi 1.2.3
requests-toolbelt 0.10.1
rpm 4.19.1
rpmautospec 0.3.8
SciPy 1.11.1
selinux 3.5
sepolicy 3.5
setools 4.4.3
setuptools 67.7.2
simpleaudio 1.0.4
simpleline 1.9.0
six 1.16.0
sniffio 1.3.0
sos 4.6.0
soupsieve 2.5
systemd-python 235
Tempita 0.5.2
toolz 0.12.0
tqdm 4.66.1
urllib3 1.26.18
websockets 11.0.3
yarl 1.9.4
youtube-dl 2021.12.17
yt-dlp 2023.10.7
So there's no mention of torchvision. I have an Intel iGPU, by the way; I don't know if that changes things or not.
Ah yeah, the paths in your error begin with /app/, which means flatpak. The flatpak uses bundled dependencies that I collected hashes for, so there is no way the dependency versions could be wrong. Pip lists the system packages, which are entirely separate, so its output isn't applicable to the flatpak. Also, the GPU shouldn't matter, as it only uses the CPU.
Maybe the logs will reveal more information? Since it launches, you can find the open log button in the bottom right. Just whatever comes after the last startup should do. I'm really not sure.
Ah, I tried updating the flatpak and now it gives me the same issue. Working on it. Thanks for bringing this to my attention.
I think I figured it out: there is an error message right at the start saying torch was compiled without libjpeg or something similar, which probably caused these C++ functions to be missing. When using the pip link from the torch website, pip resolves to a slightly different version than you get when downloading it manually, and the manual one didn't have the issue. Working on updating the flatpak now.
This is now fixed, as soon as version 2.2.0 propagates to the stable repo. You can also try the test version:
flatpak install --user https://dl.flathub.org/build-repo/79740/io.github.voxelcubes.panelcleaner.flatpakref
Thanks again, hopefully it will finally work for you. :)
It does work!!! No need to thank me! Thanks for your hard work and making this possible :D
Thank you for letting me know about the error with the flatpak. I had tested it myself and it worked just fine, and since flatpak builds use file hashes to ensure the build is always exactly the same, I figured it must've been working. Without you, I wouldn't have found out about the flatpak being broken for quite a while longer. I still don't know why what I installed locally didn't have that issue. Someone else raised the same issue a week ago on the flathub panel cleaner package repo, but I didn't get a notification for that, poor guy.
I'm using Fedora 39 with the provided binaries (I could not get it working with either pip or docker; both gave dependency errors). Everything works until I click OK and proceed to model downloading.
Logs after launching: