se1exin / Cleanarr

A simple UI to help find and delete duplicate and sample files from your Plex server
https://hub.docker.com/r/selexin/cleanarr
MIT License

Failed to load content! Please check your Plex settings and try again. #57

Closed MatthewH12 closed 1 year ago

MatthewH12 commented 2 years ago

I have verified the Plex IP and Token are correct, multiple times, but no luck. Any ideas?

Docker Command:

```shell
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
  --name='Cleanarr' --net='bridge' \
  -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e HOST_HOSTNAME="HomeSRV" -e HOST_CONTAINERNAME="Cleanarr" \
  -e 'PLEX_BASE_URL'='http://192.168.1.20:32400' -e 'PLEX_TOKEN'='REMOVEDREMOVED' \
  -e 'LIBRARY_NAMES'='Movies' -e 'BYPASS_SSL_VERIFY'='1' -e 'PAGE_SIZE'='2' \
  -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:80]/' \
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/Alphacosmos/unraid-templetes/main/Images/plex-library-cleaner.ico' \
  -p '5000:80/tcp' -v '/mnt/user/appdata/plex-library-cleaner':'/config':'rw' \
  'selexin/cleanarr'
be729993ea2b2b1ebf624234b23ceb2cef80efacdc92d178a3903aabc3298a08
```
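For readability, the same Unraid-generated command can be expressed as a compose file. This is a sketch only: the service name is arbitrary, the token placeholder is copied as redacted from the command above, and all values come from that command.

```yaml
services:
  cleanarr:
    image: selexin/cleanarr
    container_name: Cleanarr
    ports:
      - "5000:80"
    environment:
      TZ: "America/Los_Angeles"
      PLEX_BASE_URL: "http://192.168.1.20:32400"
      PLEX_TOKEN: "REMOVEDREMOVED"   # redacted in the original report
      LIBRARY_NAMES: "Movies"
      BYPASS_SSL_VERIFY: "1"
      PAGE_SIZE: "2"
    volumes:
      - /mnt/user/appdata/plex-library-cleaner:/config
```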

The command finished successfully!

Docker Log:

Run migrations

alembic upgrade head

```
/usr/lib/python2.7/dist-packages/supervisor/options.py:461: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security. 'Supervisord is running as root and it is searching '
2022-05-23 16:23:22,899 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2022-05-23 16:23:22,899 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2022-05-23 16:23:22,908 INFO RPC interface 'supervisor' initialized
2022-05-23 16:23:22,908 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2022-05-23 16:23:22,908 INFO supervisord started with pid 1
2022-05-23 16:23:23,910 INFO spawned: 'quit_on_failure' with pid 10
2022-05-23 16:23:23,911 INFO spawned: 'nginx' with pid 11
2022-05-23 16:23:23,912 INFO spawned: 'uwsgi' with pid 12
2022-05-23 16:23:23,914 INFO success: nginx entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-05-23 16:23:23,914 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
[uWSGI] getting INI configuration from /app/uwsgi.ini
[uWSGI] getting INI configuration from /etc/uwsgi/uwsgi.ini
```

```ini
;uWSGI instance configuration
[uwsgi]
cheaper = 2
processes = 16
ini = /app/uwsgi.ini
module = main
callable = app
ini = /etc/uwsgi/uwsgi.ini
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
hook-master-start = unix_signal:15 gracefully_kill_them_all
need-app = true
die-on-term = true
show-config = true
;end of configuration
```

```
Starting uWSGI 2.0.20 (64bit) on [Mon May 23 16:23:23 2022]
compiled with version: 8.3.0 on 21 April 2022 12:42:40
os: Linux-5.15.40-Unraid #1 SMP Mon May 16 10:05:44 PDT 2022
nodename: 8f7cbbabeca2
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 8
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
your processes number limit is 127338
your memory page size is 4096 bytes
detected max file descriptor number: 40960
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
WARNING: you are running uWSGI as root !!! (use the --uid flag)
Python version: 3.7.13 (default, Apr 20 2022, 19:13:06) [GCC 8.3.0]
Python threads support is disabled. You can enable it with --enable-threads
Python main interpreter initialized at 0x557db2f3bad0
uWSGI running as root, you can use --uid/--gid/--chroot options
WARNING: you are running uWSGI as root !!! (use the --uid flag)
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 1239640 bytes (1210 KB) for 16 cores
Operational MODE: preforking
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x557db2f3bad0 pid: 12 (default app)
uWSGI running as root, you can use --uid/--gid/--chroot options
WARNING: you are running uWSGI as root !!! (use the --uid flag)
uWSGI is running in multiple interpreter mode
spawned uWSGI master process (pid: 12)
spawned uWSGI worker 1 (pid: 16, cores: 1)
spawned uWSGI worker 2 (pid: 17, cores: 1)
running "unix_signal:15 gracefully_kill_them_all" (master-start)...
2022-05-23 16:23:25,313 INFO success: quit_on_failure entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
192.168.1.231 - - [23/May/2022:16:23:25 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.49 Safari/537.36 Edg/102.0.1245.14" "-"
[pid: 16|app: 0|req: 1/1] 192.168.1.231 () {44 vars in 2697 bytes} [Mon May 23 16:23:25 2022] GET /server/deleted-sizes => generated 13 bytes in 15 msecs (HTTP/1.1 200) 3 headers in 103 bytes (1 switches on core 0)
192.168.1.231 - - [23/May/2022:16:23:25 -0700] "GET /server/deleted-sizes HTTP/1.1" 200 13 "http://192.168.1.20:5000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.49 Safari/537.36 Edg/102.0.1245.14" "-"
[pid: 16|app: 0|req: 2/2] 192.168.1.231 () {44 vars in 2679 bytes} [Mon May 23 16:23:25 2022] GET /server/info => generated 79 bytes in 7 msecs (HTTP/1.1 200) 3 headers in 103 bytes (1 switches on core 0)
192.168.1.231 - - [23/May/2022:16:23:25 -0700] "GET /server/info HTTP/1.1" 200 79 "http://192.168.1.20:5000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.49 Safari/537.36 Edg/102.0.1245.14" "-"
[pid: 16|app: 0|req: 3/3] 192.168.1.231 () {44 vars in 2697 bytes} [Mon May 23 16:23:25 2022] GET /server/deleted-sizes => generated 13 bytes in 7 msecs (HTTP/1.1 200) 3 headers in 103 bytes (1 switches on core 0)
192.168.1.231 - - [23/May/2022:16:23:25 -0700] "GET /server/deleted-sizes HTTP/1.1" 200 13 "http://192.168.1.20:5000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.49 Safari/537.36 Edg/102.0.1245.14" "-"
2022/05/23 16:24:25 [error] 13#13: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.231, server: , request: "GET /content/dupes HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock", host: "192.168.1.20:5000", referrer: "http://192.168.1.20:5000/"
192.168.1.231 - - [23/May/2022:16:24:25 -0700] "GET /content/dupes HTTP/1.1" 504 569 "http://192.168.1.20:5000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.49 Safari/537.36 Edg/102.0.1245.14" "-"
```
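The 504 in the log comes from nginx giving up on `GET /content/dupes` after its default 60-second upstream read timeout while uWSGI is still scanning the library. A possible workaround, untested here and assuming the container's nginx site config can be overridden (e.g. via a bind mount), is to raise the uwsgi proxy timeouts in the location block; the actual block shipped in the image may differ:

```nginx
# Hypothetical override; the real location block in the image may look different.
location / {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/uwsgi.sock;
    uwsgi_read_timeout 600s;  # default is 60s, which matches the 504 after ~60s
    uwsgi_send_timeout 600s;
}
```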

se1exin commented 2 years ago

Hi @MatthewH12, thanks for reporting the issue! This looks similar to https://github.com/se1exin/Cleanarr/issues/55 - are you using the standard Docker installation, or are you using an alternative Docker environment (e.g. LXC)?

MatthewH12 commented 2 years ago

> Hi @MatthewH12, thanks for reporting the issue! This looks similar to #55 - are you using the standard docker installation, or are you using an alternative docker environment (e.g LXC)?

I'm using Docker on UnRAID, so I assume it's standard Docker.

dbara commented 2 years ago

Hi se1exin, I'm facing the same issue with around 7,100 movies; the TV shows are no problem. I have no idea what's wrong, and I've already been searching a long time without finding any solution. What information do you need from me? It's the Docker image from within the UnRAID Apps.

se1exin commented 2 years ago

Hi @dbara @MatthewH12 I haven't confirmed yet, but this issue might be related to running on UnRAID specifically. I don't use UnRAID but I can set it up on a spare computer to debug next chance I get.

Which operating system are you running UnRAID on? (It looks to support Mac and Windows, but not Linux.)

Killerherts commented 2 years ago

I don't think the issue is with UnRAID, as I just copied my compose file from UnRAID, moved it to my Linux laptop, and still got the "failed to load content" message. It looks like the web app is just dropping the connection after a certain number of duplicates.


```
cleanarr    | plexwrapper  2022-08-11 20:37:48,777 DEBUG    plexwrapper.py:get_dupe_content Found media: plex://movie/5d7768374de0ee001fccc076
cleanarr    | database     2022-08-11 20:37:48,777 DEBUG    database.py:get_ignored_item content_key /library/metadata/162809
cleanarr    | plexwrapper  2022-08-11 20:37:49,574 DEBUG    plexwrapper.py:get_dupe_content Get results from offset 150 to limit 200
cleanarr    | Thu Aug 11 20:37:49 2022 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /content/dupes (ip 10.16.69.116) !!!
cleanarr    | Thu Aug 11 20:37:49 2022 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during GET /content/dupes (10.16.69.116)
cleanarr    | OSError: write error
cleanarr    | [pid: 17|app: 0|req: 1/5] 10.16.69.116 () {44 vars in 713 bytes} [Thu Aug 11 20:35:38 2022] GET /content/dupes => generated 0 bytes in 130863 msecs (HTTP/1.1 200) 3 headers in 0 bytes (0 switches on core 0)
```
actuallymentor commented 2 years ago

Chiming in to say that I have the same issue, but it does not appear if I select only one library. When I select multiple, I see a very strange "unauthorized" entry in the logs for 192.168.1.148. That IP, however, is nowhere in my configs, and no such IP is known on my network.

dbara commented 2 years ago

Hi @se1exin, I'm running UnRAID as the base OS. I managed to solve it somehow, though I'm not sure what specifically the issue was. I manually deleted some folders which had a "deep structure" and some ADMIN folders within. I also made a new Plex instance and deleted with Cleanarr while Plex was still importing. In total I deleted around 1-2 TB of data, and now it runs smoothly. I'll start adding more once I've bought a new HDD, but currently that seems to work.

Tl;dr: I deleted deep folder structures and the ADMIN folders inside some directories. Now it works.

dbara commented 1 year ago

Ok, the issue is back again. The media structure is clean, with 9,000 items in the library. A different library works.

```
Tue Dec 27 10:49:38 2022 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during GET /content/dupes (192.168.1.27)
OSError: write error
[pid: 17|app: 0|req: 2/8] 192.168.1.27 () {42 vars in 920 bytes} [Tue Dec 27 10:47:54 2022] GET /content/dupes => generated 0 bytes in 103951 msecs (HTTP/1.1 200) 3 headers in 0 bytes (0 switches on core 0)
192.168.1.27 - - [27/Dec/2022:10:49:41 -0800] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36" "-"
[pid: 16|app: 0|req: 7/9] 192.168.1.27 () {42 vars in 934 bytes} [Tue Dec 27 10:49:41 2022] GET /server/deleted-sizes => generated 24 bytes in 3 msecs (HTTP/1.1 200) 3 headers in 103 bytes (1 switches on core 0)
192.168.1.27 - - [27/Dec/2022:10:49:41 -0800] "GET /server/deleted-sizes HTTP/1.1" 200 24 "http://192.168.1.104:5000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36" "-"
[pid: 16|app: 0|req: 8/10] 192.168.1.27 () {42 vars in 916 bytes} [Tue Dec 27 10:49:41 2022] GET /server/info => generated 76 bytes in 3 msecs (HTTP/1.1 200) 3 headers in 103 bytes (1 switches on core 0)
192.168.1.27 - - [27/Dec/2022:10:49:41 -0800] "GET /server/info HTTP/1.1" 200 76 "http://192.168.1.104:5000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36" "-"
[pid: 16|app: 0|req: 9/11] 192.168.1.27 () {42 vars in 934 bytes} [Tue Dec 27 10:49:41 2022] GET /server/deleted-sizes => generated 24 bytes in 4 msecs (HTTP/1.1 200) 3 headers in 103 bytes (1 switches on core 0)
192.168.1.27 - - [27/Dec/2022:10:49:41 -0800] "GET /server/deleted-sizes HTTP/1.1" 200 24 "http://192.168.1.104:5000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36" "-"
2022/12/27 10:50:41 [error] 13#13: *8 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.27, server: , request: "GET /content/dupes HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock", host: "192.168.1.104:5000", referrer: "http://192.168.1.104:5000/"
192.168.1.27 - - [27/Dec/2022:10:50:41 -0800] "GET /content/dupes HTTP/1.1" 504 569 "http://192.168.1.104:5000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36" "-"
Tue Dec 27 10:51:25 2022 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /content/dupes (ip 192.168.1.27) !!!
Tue Dec 27 10:51:25 2022 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during GET /content/dupes (192.168.1.27)
```

lance-tek commented 1 year ago

> uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during GET /content/dupes (192.168.1.27)

I'm seeing the same thing on some rather large libraries. It works fine on multiple smaller libraries, though. I'm not sure of the cause, since I've never gotten it to load either of these two libraries; they are tens of TB each, with thousands of files to scan through. The smaller ones don't seem to have this problem. But it could be file/folder names or something, for all I know.

dbara commented 1 year ago

Yeah, I simply don't get it. I believe Cleanarr relies on the Plex dupe finder and then lists the files. Since Plex can show me the dupes, it's strange that Cleanarr can't. Unfortunately, Plex only shows that there are dupes, not how big they are, nor their resolution, etc. I totally love Cleanarr and I'm still hoping it will work again.

Killerherts commented 1 year ago

Just use FileBot: it has a command-line tool for this and is worth the purchase. It can auto-resolve any conflicts as well.

dbara commented 1 year ago

Hi @Killerherts, I'd like to rely on the Plex dupe identifier, as it only matches identical items. FileBot would also match similar file names, for example Die Hard 1 and Die Hard 2, etc. I already tried FileBot once but wasn't pleased at all. As I recall, FileBot can't show me the resolution of the files or the codec used, or am I wrong?

Please let me know if I simply don't know how to use FileBot ;) My approach would be something like deleting the biggest file if the resolutions are the same, and preferably keeping h265.
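That rule of thumb (delete the biggest file when resolutions match, prefer h265) can be sketched as a small keep/delete selector. The field names below are hypothetical and not Cleanarr's or FileBot's actual data model:

```python
def pick_keeper(files):
    """Given duplicate versions of one movie, pick the file to keep.

    Prefers HEVC (h265), then the smallest file; the rest are delete candidates.
    Each file is a dict like {"path": ..., "codec": ..., "size": ...}.
    """
    # Sort key: non-HEVC sorts after HEVC, then larger sizes after smaller ones.
    return min(files, key=lambda f: (f["codec"] != "hevc", f["size"]))

# Usage with made-up duplicates of the same movie:
dupes = [
    {"path": "movie.1080p.x264.mkv", "codec": "h264", "size": 12_000},
    {"path": "movie.1080p.x265.mkv", "codec": "hevc", "size": 4_000},
]
keeper = pick_keeper(dupes)
to_delete = [f for f in dupes if f is not keeper]
print(keeper["path"])  # → movie.1080p.x265.mkv
```

Resolution could be added as another tuple element in the sort key if you only want to compare files of equal resolution.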

snickers2k commented 1 year ago

@lance-tek, you are right. I have the same problem. As soon as I changed the library to a smaller one, it worked.

@se1exin, why not simply extend the timeout? Also, I'm running Ubuntu, not UnRAID.

peter-mcconnell commented 1 year ago

Just posted this which seems relevant here also: https://github.com/se1exin/Cleanarr/issues/55#issuecomment-1454916176

jbeck22 commented 1 year ago

> Just use FileBot: it has a command-line tool for this and is worth the purchase. It can auto-resolve any conflicts as well.

Do you happen to have the CLI command that you run to do this? I have purchased the yearly license for FileBot, but I can't get it to find any duplicates... Plex tells me I have 138 :(