nandyalu / trailarr

Trailarr is a Docker application to download and manage trailers for your media library. It integrates with your existing services, such as Plex, Radarr, and Sonarr!
GNU General Public License v3.0

[Bug] Six downloads in five hours #36

Open Starminder opened 1 week ago

Starminder commented 1 week ago

Describe the bug
Going very slow; now it appears to have stopped.

Steps To Reproduce
Check the About page. Statistics: Movies 4039, Movies Monitored 4038, Series 1540, Series Monitored 1535, Trailers downloaded 6.

Actual behavior
Technically it is working, but it should be faster than downloading manually.

Expected behavior
Set it and forget it. Maybe get a report emailed once a day with relevant data.

Screenshots
N/A

App Information (please complete the following information)
Will update

Additional context
Future

Starminder commented 1 week ago

It ran all night. I rebooted the server. Rebooting reset almost all configuration to defaults and wiped out the Radarr/Sonarr connections. After 4 hours:

Statistics: Movies 4044, Movies Monitored 3937, Series 1540, Series Monitored 1535, Trailers downloaded 0

nandyalu commented 1 week ago

Can you post your docker config and some logs?

Starminder commented 1 week ago

Portainer Stack:

```yaml
services:
  trailarr:
    image: nandyalu/trailarr:latest
    container_name: trailarr
    environment:
```
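For context, a minimal sketch of what such a stack typically looks like; everything beyond the image and container name (timezone, port mapping, appdata path) is an illustrative placeholder, not the actual config from this report:

```yaml
services:
  trailarr:
    image: nandyalu/trailarr:latest
    container_name: trailarr
    environment:
      - TZ=America/New_York            # placeholder timezone
    ports:
      - "7889:7889"                    # the uvicorn port seen later in this thread
    volumes:
      - /path/to/appdata:/config       # placeholder appdata path
      - /Videos:/media/movies          # media paths mentioned later in the thread
      - "/Recorded TV:/media/tv"
    restart: unless-stopped
```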

Starminder commented 1 week ago

Logs (newest entries first):

```
2024-09-03T08:24:34-0400 [INFO|main|L092]: Main: Client #53535 connected!
2024-09-03T08:23:04-0400 [INFO|trailer|L255]: TrailersDownloader: Downloading trailer for '[1839]A Bit of Light'...
2024-09-03T08:23:04-0400 [INFO|trailer|L263]: TrailersDownloader: Trailer download failed for '[1838]Madame Web'
2024-09-03T08:23:04-0400 [ERROR|trailer|L144]: TrailersDownloader: Failed to move trailer to folder: [Errno 2] No such file or directory: ''
2024-09-03T08:19:46-0400 [INFO|base|L144]: Job "Arr Data Refresh (trigger: interval[1:00:00], next run at: 2024-09-03 09:18:30 EDT)" executed successfully
2024-09-03T08:19:46-0400 [INFO|api_refresh|L028]: APIRefreshTasks: API Refresh completed!
2024-09-03T08:19:46-0400 [INFO|image_refresh|L019]: ImageRefreshTasks: Series Images refresh complete!
2024-09-03T08:19:42-0400 [INFO|image_refresh|L013]: ImageRefreshTasks: Refreshing images in the system
2024-09-03T08:19:42-0400 [INFO|api_refresh|L046]: APIRefreshTasks: Data refreshed for connection: Radarr
2024-09-03T08:19:23-0400 [INFO|base|L072]: Media: 0 Created, 0 Updated.
2024-09-03T08:18:46-0400 [INFO|api_refresh|L032]: APIRefreshTasks: Refreshing data from API for connection: Radarr
2024-09-03T08:18:46-0400 [INFO|api_refresh|L046]: APIRefreshTasks: Data refreshed for connection: Sonarr
2024-09-03T08:18:39-0400 [INFO|base|L072]: Media: 0 Created, 1 Updated.
2024-09-03T08:18:39-0400 [WARNING|base|L990]: Execution of job "Download Missing Trailers (trigger: interval[1:00:00], next run at: 2024-09-03 08:18:39 EDT)" skipped: maximum number of running instances reached (1)
2024-09-03T08:18:30-0400 [INFO|api_refresh|L032]: APIRefreshTasks: Refreshing data from API for connection: Sonarr
2024-09-03T08:18:30-0400 [INFO|api_refresh|L015]: APIRefreshTasks: Refreshing data from APIs
2024-09-03T08:18:30-0400 [INFO|base|L123]: Running job "Arr Data Refresh (trigger: interval[1:00:00], next run at: 2024-09-03 09:18:30 EDT)" (scheduled at 2024-09-03 08:18:30.897019-04:00)
2024-09-03T08:02:15-0400 [INFO|trailer|L255]: TrailersDownloader: Downloading trailer for '[1838]Madame Web'...
2024-09-03T08:02:15-0400 [INFO|trailer|L263]: TrailersDownloader: Trailer download failed for '[1837]The Zone of Interest'
2024-09-03T08:02:15-0400 [ERROR|trailer|L144]: TrailersDownloader: Failed to move trailer to folder: [Errno 2] No such file or directory: ''
2024-09-03T07:56:31-0400 [INFO|trailer|L255]: TrailersDownloader: Downloading trailer for '[1837]The Zone of Interest'...
2024-09-03T07:56:31-0400 [INFO|trailer|L263]: TrailersDownloader: Trailer download failed for '[1836]Asphalt City'
2024-09-03T07:56:31-0400 [ERROR|trailer|L144]: TrailersDownloader: Failed to move trailer to folder: [Errno 2] No such file or directory: ''
2024-09-03T07:43:25-0400 [INFO|trailer|L255]: TrailersDownloader: Downloading trailer for '[1836]Asphalt City'...
2024-09-03T07:43:25-0400 [INFO|trailer|L263]: TrailersDownloader: Trailer download failed for '[1835]Baltimore'
2024-09-03T07:43:25-0400 [ERROR|trailer|L144]: TrailersDownloader: Failed to move trailer to folder: [Errno 2] No such file or directory: ''
2024-09-03T07:33:26-0400 [INFO|trailer|L255]: TrailersDownloader: Downloading trailer for '[1835]Baltimore'...
2024-09-03T07:33:26-0400 [INFO|trailer|L263]: TrailersDownloader: Trailer download failed for '[1834]Horizon: An American Saga – Chapter 1'
2024-09-03T07:33:26-0400 [ERROR|trailer|L144]: TrailersDownloader: Failed to move trailer to folder: [Errno 2] No such file or directory: ''
2024-09-03T07:26:17-0400 [INFO|trailer|L255]: TrailersDownloader: Downloading trailer for '[1834]Horizon: An American Saga – Chapter 1'...
2024-09-03T07:26:17-0400 [INFO|trailer|L263]: TrailersDownloader: Trailer download failed for '[1833]The Stepdaughter'
2024-09-03T07:26:17-0400 [ERROR|trailer|L144]: TrailersDownloader: Failed to move trailer to folder: [Errno 2] No such file or directory: ''
2024-09-03T07:20:22-0400 [INFO|base|L144]: Job "Arr Data Refresh (trigger: interval[1:00:00], next run at: 2024-09-03 08:18:30 EDT)" executed successfully
2024-09-03T07:20:22-0400 [INFO|api_refresh|L028]: APIRefreshTasks: API Refresh completed!
2024-09-03T07:20:22-0400 [INFO|image_refresh|L019]: ImageRefreshTasks: Series Images refresh complete!
2024-09-03T07:20:17-0400 [INFO|image_refresh|L013]: ImageRefreshTasks: Refreshing images in the system
2024-09-03T07:20:17-0400 [INFO|api_refresh|L046]: APIRefreshTasks: Data refreshed for connection: Radarr
2024-09-03T07:19:58-0400 [INFO|base|L072]: Media: 0 Created, 2 Updated.
```

Traceback (line order reversed by the newest-first log export):

```
(Background on this error at: https://sqlalche.me/e/20/e3q8)
[parameters: ('/data/web/images/shows/fanart/2dc203fbe90405e69b98f588fc1cae73.jpg', 1419)]
[SQL: UPDATE media SET fanart_path=? WHERE media.id = ?]
(sqlite3.OperationalError) database is locked
sqlalchemy.exc.OperationalError: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely)
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 941, in do_execute
self.dialect.do_execute(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2355, in _handle_dbapi_exception
self._handle_dbapi_exception(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
^^^^^^^^^^^^^^^^^^^^^^^^^^
return self._exec_single_context(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
^^^^^^^^^^^^^^^^^^^^^^
ret = self._execute_context(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1640, in _execute_clauseelement
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
return connection._execute_clauseelement(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 515, in _execute_on_connection
^^^^^
return meth(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1418, in execute
^^^^^^^^^^^^^^^^^^^
c = connection.execute(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/persistence.py", line 912, in _emit_update_statements
_emit_update_statements(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/persistence.py", line 85, in save_obj
util.preloaded.orm_persistence.save_obj(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/unitofwork.py", line 642, in execute
rec.execute(self)
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/unitofwork.py", line 466, in execute
flush_context.execute()
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4448, in _flush
raise exc_value.with_traceback(exc_tb)
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/util/langhelpers.py", line 146, in exit
with util.safe_reraise():
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4487, in _flush
self._flush(objects)
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 4352, in flush
self.flush()
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 3050, in _autoflush
raise e.with_traceback(sys.exc_info()[2])
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 3061, in _autoflush
session._autoflush()
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/context.py", line 549, in orm_pre_session_exec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
) = compile_state_cls.orm_pre_session_exec(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 2226, in _execute_internal
^^^^^^^^^^^^^^^^^^^^^^^
return self._execute_internal(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 2362, in execute
^^^^^^^^^^^^^^^^
return super().execute(
File "/usr/local/lib/python3.12/site-packages/sqlmodel/orm/session.py", line 127, in execute
session.execute(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/loading.py", line 694, in load_on_pk_identity
^^^^^^^^^^^
return db_load_fn(
File "/usr/local/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 3873, in _get_impl
```
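Two failure modes recur in these logs: the downloader receives an empty destination path (`[Errno 2] No such file or directory: ''`), and SQLite reports `database is locked` under concurrent writes. A minimal sketch (not Trailarr's code) that reproduces the first error and shows a common mitigation for the second, assuming a file-backed SQLite database:

```python
import os
import sqlite3
import tempfile

# 1) An empty destination path raises the same OSError seen in the log:
try:
    open("", "wb")
except FileNotFoundError as err:
    print(err)  # [Errno 2] No such file or directory: ''

# 2) "database is locked" mitigation (assumption: file-backed SQLite DB):
#    a busy timeout makes a writer wait instead of failing immediately,
#    and WAL journal mode lets readers coexist with a single writer.
db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(db_path, timeout=30)            # wait up to 30s on a lock
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
```

Whether the empty path and the lock error share a root cause here is not established by the logs; the sketch only illustrates the mechanics of each error.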

Starminder commented 1 week ago

Statistics: Movies 4046, Movies Monitored 3939, Series 1543, Series Monitored 1538, Trailers downloaded 0

Starminder commented 1 week ago

Stopped downloading, but ffmpeg was eating 100% CPU.

nandyalu commented 1 week ago

What CPU do you have on your machine?

Starminder commented 1 week ago

Quad-Core 10 nm Intel Celeron N5105 CPU

nandyalu commented 1 week ago

It's possible that your CPU is overloaded, since it's a NUC.

Starminder commented 1 week ago

It's an Asustor NAS.

[Screenshot 2024-09-04 062925: without Trailarr]
[Screenshot 2024-09-04 063528: with Trailarr (ffmpeg)]

I can try it on a newer machine, but right now ffmpeg is consuming all the CPU and the app isn't downloading anything, so I think that's really the first hurdle.

nandyalu commented 1 week ago

That's interesting... can you log into the container and check what processes are running?

You can use the `ps -a` command to check running processes.

Starminder commented 1 week ago

PID USER TIME COMMAND 1 root 0:02 init 2 root 0:00 [kthreadd] 3 root 0:00 [rcu_gp] 4 root 0:00 [rcu_par_gp] 6 root 0:00 [kworker/0:0H-ev] 8 root 0:00 [mm_percpu_wq] 9 root 2:07 [ksoftirqd/0] 10 root 8:26 [rcu_sched] 11 root 0:01 [migration/0] 12 root 0:00 [cpuhp/0] 13 root 0:00 [cpuhp/1] 14 root 0:01 [migration/1] 15 root 2:11 [ksoftirqd/1] 17 root 0:00 [kworker/1:0H-ev] 18 root 0:00 [cpuhp/2] 19 root 0:01 [migration/2] 20 root 2:17 [ksoftirqd/2] 22 root 0:00 [kworker/2:0H-ev] 23 root 0:00 [cpuhp/3] 24 root 0:01 [migration/3] 25 root 4:35 [ksoftirqd/3] 27 root 0:14 [kworker/3:0H-kb] 28 root 0:00 [kdevtmpfs] 29 root 0:00 [netns] 30 root 0:00 [inet_frag_wq] 33 root 0:00 [khungtaskd] 34 root 0:00 [oom_reaper] 35 root 0:00 [writeback] 36 root 3:26 [kcompactd0] 51 root 0:00 [pencrypt_serial] 52 root 0:00 [pdecrypt_serial] 53 root 0:00 [cryptd] 71 root 0:00 [kintegrityd] 72 root 0:00 [kblockd] 73 root 0:00 [blkcg_punt_bio] 74 root 0:00 [tpm_dev_wq] 75 root 0:00 [ata_sff] 76 root 0:00 [md] 77 root 0:00 [watchdogd] 79 root 0:00 [cfg80211] 80 root 0:13 [kworker/1:1H-kb] 83 root 6:11 [kswapd0] 85 root 0:00 [kthrotld] 86 root 0:00 [acpi_thermal_pm] 88 root 0:00 [irq/127-mei_me] 93 root 0:00 [nvme-wq] 94 root 0:00 [nvme-reset-wq] 95 root 0:00 [nvme-delete-wq] 104 root 0:00 [scsi_eh_0] 105 root 0:00 [scsi_tmf_0] 106 root 0:00 [scsi_eh_1] 107 root 0:00 [scsi_tmf_1] 108 root 0:00 [scsi_eh_2] 109 root 0:00 [scsi_tmf_2] 111 root 0:00 [scsi_eh_3] 112 root 0:00 [scsi_tmf_3] 116 root 0:00 [raid5wq] 117 root 0:00 [dm_bufio_cache] 119 root 0:00 [sdhci] 120 root 0:00 [irq/16-mmc0] 121 root 0:00 [mld] 122 root 0:00 [ipv6_addrconf] 123 root 0:18 [kworker/0:1H-kb] 139 root 0:00 [mmc_complete] 144 root 0:14 [kworker/2:1H-kb] 521 root 0:00 [kworker/3:1H] 530 root 0:13 /usr/sbin/nasmand 537 root 0:09 /usr/sbin/netmand 569 root 1:40 /usr/sbin/lcmd 585 root 0:00 /usr/sbin/raidmand -daemon 587 root 0:42 /usr/sbin/stormand -daemon 669 root 0:00 [md0raid1] 676 root 0:00 [jbd2/md0-8] 677 root 0:00 
[ext4-rsv-conver] 683 root 0:00 [loop0] 690 root 0:00 [ext4-rsv-conver] 791 root 0:17 /usr/sbin/logmand 795 root 0:01 /usr/builtin/sbin/rsyslogd -f /etc/rsyslog.conf 873 root 0:00 [bond0] 906 root 0:00 [bond1] 968 root 0:00 [bond2] 983 root 0:00 [kworker/3:1-mm] 1358 root 0:00 [bond3] 1776 root 0:00 [md126_raid1] 1879 root 0:00 [btrfs-worker] 1880 root 0:00 [btrfs-worker-hi] 1881 root 0:00 [btrfs-delalloc] 1882 root 0:00 [btrfs-flush_del] 1883 root 0:00 [btrfs-cache] 1884 root 0:00 [btrfs-fixup] 1885 root 0:00 [btrfs-endio] 1886 root 0:00 [btrfs-endio-met] 1887 root 0:00 [btrfs-endio-met] 1888 root 0:00 [btrfs-endio-rai] 1889 root 0:00 [btrfs-rmw] 1890 root 0:00 [btrfs-endio-wri] 1891 root 0:00 [btrfs-freespace] 1892 root 0:00 [btrfs-delayed-m] 1893 root 0:00 [btrfs-readahead] 1894 root 0:00 [btrfs-qgroup-re] 1895 root 0:01 [btrfs-cleaner] 1896 root 5:21 [btrfs-transacti] 1898 root 0:01 /usr/sbin/volmand volume1 1946 root 1:26 [md2_raid5] 1959 root 0:00 [btrfs-worker] 1960 root 0:00 [btrfs-worker-hi] 1961 root 0:00 [btrfs-delalloc] 1962 root 0:00 [btrfs-flush_del] 1963 root 0:00 [btrfs-cache] 1964 root 0:00 [btrfs-fixup] 1965 root 0:00 [btrfs-endio] 1966 root 0:00 [btrfs-endio-met] 1967 root 0:00 [btrfs-endio-met] 1968 root 0:00 [btrfs-endio-rai] 1969 root 0:00 [btrfs-rmw] 1970 root 0:00 [btrfs-endio-wri] 1971 root 0:00 [btrfs-freespace] 1972 root 0:00 [btrfs-delayed-m] 1973 root 0:00 [btrfs-readahead] 1974 root 0:00 [btrfs-qgroup-re] 1989 root 0:00 [scsi_eh_4] 1990 root 0:00 [scsi_tmf_4] 1998 root 0:05 [usb-storage] 2249 root 0:00 [scsi_eh_5] 2250 root 0:00 [scsi_tmf_5] 2258 root 0:05 [usb-storage] 2293 root 0:00 [scsi_eh_6] 2294 root 0:00 [scsi_tmf_6] 2301 root 0:05 [usb-storage] 2335 root 0:00 [scsi_eh_7] 2337 root 0:00 [scsi_tmf_7] 2344 root 0:05 [usb-storage] 2668 root 0:00 nginx: master process /usr/builtin/sbin/nginx -c /usr/buil 2669 root 0:00 nginx: worker process 2670 root 0:00 nginx: worker process 2671 root 0:00 nginx: worker process 2672 root 0:00 
nginx: worker process 2711 root 0:00 [btrfs-cleaner] 2712 root 0:16 [btrfs-transacti] 2714 root 0:01 /usr/sbin/volmand volume2 2758 root 0:02 /usr/sbin/crond 2768 root 0:00 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2769 root 0:25 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2770 root 0:23 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2771 root 0:27 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2772 root 0:25 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2773 root 0:45 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2774 root 0:26 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2775 root 0:27 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2776 root 7:27 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2777 root 0:26 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2778 root 0:28 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2779 root 0:29 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2780 root 0:27 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2781 root 0:32 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2782 root 0:43 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2783 root 1:24 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2784 root 3:50 /usr/sbin/lighttpd -f /usr/etc/lighttpd/lighttpd.conf 2796 root 0:00 /usr/builtin/sbin/loginusrmand 2807 root 0:00 sshd: /usr/sbin/sshd [listener] 0 of 10-100 startups 3021 root 0:00 /usr/sbin/hostmand 3037 root 0:29 [sys_stat] 3048 root 0:50 [HddLightThread] 3102 root 0:05 [asspkr_thread] 3126 root 29:56 /usr/sbin/emboardmand 3139 root 0:00 /usr/sbin/crontab_check 3146 root 6:14 /usr/sbin/watchmand 3158 root 6:19 /usr/sbin/guarddogd 3264 root 0:00 [card0-crtc0] 3265 root 0:00 [card0-crtc1] 3266 root 0:00 [card0-crtc2] 3587 root 0:00 tunnel_client 3638 root 0:00 [kworker/0:1-eve] 3903 root 0:00 [iscsi_eh] 3907 root 0:00 [iscsi_conn_clea] 3971 root 0:05 /usr/builtin/sbin/iscsid -c=/usr/builtin/etc/iscsid.conf 3976 
root 0:00 /usr/builtin/sbin/iscsid -c=/usr/builtin/etc/iscsid.conf 4010 root 0:00 /usr/builtin/bin/rsyncd --no-detach --daemon --config /usr 4020 root 0:00 /usr/sbin/recybind 4024 root 0:01 /usr/builtin/sbin/recydbmand 4031 root 0:00 /usr/builtin/sbin/smbd -D 4036 root 0:00 {smbd-notifyd} /usr/builtin/sbin/smbd -D 4043 root 0:04 /usr/builtin/sbin/wsdd2 -d -w 4045 root 0:00 {lpqd} /usr/builtin/sbin/smbd -D 4051 root 2:07 /usr/builtin/sbin/procctrld 4031 4056 root 2:14 /usr/builtin/sbin/nmbd -D 4062 root 21:23 /usr/builtin/sbin/winbindd -D -l /var/log/samba 4067 root 8:21 /usr/builtin/sbin/winbindd -D -l /var/log/samba 4068 nobody 0:00 /usr/bin/dbus-daemon --system --fork 4076 nobody 0:08 proftpd: (accepting connections) 4089 root 0:00 /usr/builtin/sbin/winbindd -D -l /var/log/samba 4108 root 0:00 nginx: master process /usr/builtin/sbin/nginx 4109 root 0:00 nginx: worker process 4110 root 0:00 nginx: worker process 4111 root 0:00 nginx: worker process 4112 root 0:00 nginx: worker process 4138 root 0:07 /usr/sbin/ezrouterd -t 300 4144 root 0:01 /usr/builtin/sbin/sftpmand 4149 root 0:00 sshd_sftp: /usr/sbin/sshd_sftp -S -f /usr/builtin/etc/sshd 4155 root 0:13 /usr/builtin/sbin/acloudidd -interval 3600 -check 60 4183 root 0:00 [usbip_event] 4210 avahi 11:50 avahi-daemon: running [SPROCKET.local] 4211 avahi 0:00 avahi-daemon: chroot helper 4220 root 0:00 /usr/builtin/sbin/usbipd -D 4226 root 0:00 /usr/builtin/sbin/cupsd -C /usr/builtin/etc/cups/cupsd.con 4234 root 1:59 /usr/builtin/sbin/p2pmand 4241 root 0:06 /usr/sbin/ftpbackupd 4252 root 0:00 /usr/builtin/sbin/ezsyncd -interval 86400 4273 root 0:02 /usr/builtin/sbin/actmand 4277 root 0:00 [cryptodev_queue] 4289 root 0:12 /usr/builtin/sbin/taskmonitord 4310 root 0:00 nginx: master process nginx -c /usr/builtin/etc/nginx_prox 4345 root 0:00 /usr/builtin/bin/servicemanager 4386 root 0:00 [krfcommd] 4480 root 0:00 /usr/builtin/sbin/bluetoothd 4496 root 0:12 /usr/builtin/sbin/cdmand -start 4514 root 0:00 
/usr/builtin/sbin/cifsdrvd 4527 root 0:00 /usr/builtin/bin/thumbnail -daemon 4528 root 9:35 /usr/builtin/sbin/irmand 5195 root 3h52 /usr/local/AppCentral/plexmediaserver/Plex Media Server 5254 root 0:00 [kworker/1:1-rcu] 5267 root 2:21 {Plex Script Hos} Plex Plug-in [com.plexapp.system] /volum 5391 root 10:22 /volume1/.@plugins/AppCentral/plexmediaserver/Plex DLNA Se 5392 root 4:19 /volume1/.@plugins/AppCentral/plexmediaserver/Plex Tuner S 5438 root 2:08 {Plex Script Hos} Plex Plug-in [com.plexapp.plugins.WebToo 5444 root 5:50 {Plex Script Hos} Plex Plug-in [com.plexapp.agents.subzero 5663 root 0:00 /usr/builtin/sbin/rpcbind 5825 root 1:17 [md21_raid5] 5878 root 0:00 [btrfs-worker] 5879 root 0:00 [btrfs-worker-hi] 5880 root 0:00 [btrfs-delalloc] 5881 root 0:00 [btrfs-flush_del] 5882 root 0:00 [btrfs-cache] 5883 root 0:00 [btrfs-fixup] 5884 root 0:00 [btrfs-endio] 5885 root 0:00 [btrfs-endio-met] 5886 root 0:00 [btrfs-endio-met] 5887 root 0:00 [btrfs-endio-rai] 5888 root 0:00 [btrfs-rmw] 5889 root 0:00 [btrfs-endio-wri] 5890 root 0:00 [btrfs-freespace] 5891 root 0:00 [btrfs-delayed-m] 5892 root 0:00 [btrfs-readahead] 5893 root 0:00 [btrfs-qgroup-re] 5896 root 0:00 [btrfs-cleaner] 5897 root 0:02 [btrfs-transacti] 5899 root 0:01 /usr/sbin/volmand volume21 5915 root 0:00 {cleanupd} /usr/builtin/sbin/smbd -D 6120 root 11:54 /usr/local/AppCentral/docker-ce/bin/dockerd --debug --log- 6136 root 4:35 containerd --config /var/run/docker/containerd/containerd. 
6208 root 0:00 [scst_release_ac] 6211 root 0:00 [scst_uid] 6227 root 0:00 [scstd0] 6228 root 0:00 [scstd1] 6229 root 0:00 [scstd2] 6230 root 0:00 [scstd3] 6231 root 0:00 [scst_initd] 6232 root 0:00 [scsi_tm] 6233 root 0:00 [scst_mgmtd] 6244 root 0:01 [scst_actid] 6253 root 0:00 [iscsird0_0] 6254 root 0:00 [iscsird0_1] 6255 root 0:00 [iscsird0_2] 6256 root 0:00 [iscsird0_3] 6257 root 0:00 [iscsiwr0_0] 6258 root 0:00 [iscsiwr0_1] 6259 root 0:00 [iscsiwr0_2] 6260 root 0:00 [iscsiwr0_3] 6269 root 0:00 /usr/builtin/sbin/iscsi-scstd -p 3260 6279 root 0:01 [kworker/u8:1-ev] 6734 root 0:01 nginx: worker process 7299 root 0:00 sshd-session: Starminder [priv] 7303 Starmind 0:00 sshd-session: Starminder@notty 7304 Starmind 0:00 sshd-session: Starminder@internal-sftp 7305 root 0:00 sshd-session: Starminder [priv] 7309 Starmind 0:00 sshd-session: Starminder 7315 root 0:00 sshd-session: Starminder [priv] 7334 Starmind 0:00 sshd-session: Starminder@notty 7337 Starmind 0:00 sshd-session: Starminder@internal-sftp 7338 root 0:00 sshd-session: Starminder [priv] 7346 Starmind 0:00 sshd-session: Starminder 7904 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -h 7925 root 0:45 /volume1/.@plugins/AppCentral/docker-ce/bin/containerd-shi 7955 root 0:00 /package/admin/s6/command/s6-svscan -d4 -- /run/service 8093 root 0:00 s6-supervise s6-linux-init-shutdownd 8095 root 0:00 /package/admin/s6-linux-init/command/s6-linux-init-shutdow 8128 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -h 8136 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -h 8143 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -h 8181 root 0:00 s6-supervise s6rc-oneshot-runner 8182 root 0:00 s6-supervise s6rc-fdholder 8184 root 0:00 s6-supervise svc-cron 8185 root 0:00 s6-supervise svc-tautulli 8207 root 0:00 /package/admin/s6/command/s6-ipcserverd -1 -- /package/adm 8322 root 0:43 /volume1/.@plugins/AppCentral/docker-ce/bin/containerd-shi 8369 root 
1:19 /portainer --sslcert /certs/ssl.pem --sslkey /certs/ssl.pe 8396 root 0:00 [kworker/u8:2-bt] 8425 root 0:00 busybox crond -f -S -l 5 8426 admin 4:51 python3 /app/tautulli/Tautulli.py --datadir /config 10885 root 0:03 [kworker/u8:5-bt] 11700 root 0:00 init 12639 root 2:49 orbwebM2Md 12765 root 0:00 sshd-session: Starminder [priv] 12769 Starmind 0:00 sshd-session: Starminder@notty 12770 Starmind 0:00 sshd-session: Starminder@internal-sftp 12771 root 0:00 sshd-session: Starminder [priv] 12777 Starmind 0:00 sshd-session: Starminder 14370 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -h 14393 root 0:10 /volume1/.@plugins/AppCentral/docker-ce/bin/containerd-shi 14421 root 0:00 /package/admin/s6/command/s6-svscan -d4 -- /run/service 14471 root 0:00 s6-supervise s6-linux-init-shutdownd 14473 root 0:00 /package/admin/s6-linux-init/command/s6-linux-init-shutdow 14493 root 0:00 s6-supervise s6rc-oneshot-runner 14494 root 0:00 s6-supervise s6rc-fdholder 14495 root 0:00 s6-supervise svc-cron 14496 root 0:00 s6-supervise svc-ombi 14504 root 0:00 /package/admin/s6/command/s6-ipcserverd -1 -- /package/adm 14622 admin 30:36 /app/ombi/Ombi --storage /config --host http://*:3579 14624 root 0:00 bash ./run svc-cron 14634 root 0:00 sleep infinity 15833 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -h 15855 root 0:17 /volume1/.@plugins/AppCentral/docker-ce/bin/containerd-shi 15880 root 0:00 /package/admin/s6/command/s6-svscan -d4 -- /run/service 15925 root 0:00 s6-supervise s6-linux-init-shutdownd 15927 root 0:00 /package/admin/s6-linux-init/command/s6-linux-init-shutdow 15947 root 0:00 s6-supervise s6rc-oneshot-runner 15948 root 0:00 s6-supervise s6rc-fdholder 15949 root 0:00 s6-supervise svc-cron 15950 root 0:00 s6-supervise svc-overseerr 15958 root 0:00 /package/admin/s6/command/s6-ipcserverd -1 -- /package/adm 16063 admin 0:25 node /usr/share/nodemodules/yarn/bin/yarn.js start 16066 root 0:00 busybox crond -f -S -l 5 16103 admin 6:18 
/usr/bin/node dist/index.js 16725 root 2:00 /usr/local/AppCentral/clamav/bin/clamavctl 17119 root 0:02 [kworker/u8:6-bt] 19846 root 0:00 [kworker/u8:8-ev] 20021 root 6h50 /usr/builtin/sbin/smbd -D 22335 root 0:00 [kworker/2:2-eve] 22362 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -h 22382 root 0:11 /volume1/.@plugins/AppCentral/docker-ce/bin/containerd-shi 22412 root 3:07 node index.js 23690 root 0:00 [kworker/0:2-rcu] 24464 root 0:00 [kworker/u8:0-bt] 24616 root 0:00 [kworker/u9:5-bt] 24617 root 0:00 [kworker/u9:6-bt] 25096 root 0:00 [kworker/1:0-mm] 25162 root 0:00 [kworker/u8:4-ev] 25614 root 0:00 [kworker/3:0-eve] 26417 root 0:00 [kworker/2:1-eve] 28561 root 0:00 sshd-session: Starminder [priv] 28672 Starmind 0:00 sshd-session: Starminder@pts/0 28675 Starmind 0:00 -sh 29155 root 0:00 [kworker/0:0-mm] 29171 root 0:00 [kworker/1:2] 29216 root 0:00 [kworker/u8:3-bt] 29339 root 2h01 {EasyAudioEncode} Plex EAE Service 29653 root 0:00 /usr/local/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -h 29674 root 0:00 /volume1/.@plugins/AppCentral/docker-ce/bin/containerd-shi 29701 Starmind 0:34 {uvicorn} /usr/local/bin/python /usr/local/bin/uvicorn bac 29965 root 0:00 /usr/builtin/sbin/winbindd -D -l /var/log/samba 30541 root 0:00 [kworker/2:0-mm] 30569 root 0:00 [kworker/3:2] 31231 root 0:00 /usr/webman/portal/apis/events.cgi

nandyalu commented 1 week ago

This is from your host system. Can you log in to the Trailarr container and run the command there?

Starminder commented 1 week ago

This machine uses Portainer as a frontend to Docker. On a different machine I can exec into the container through Docker and it works as I expect. On this machine, when I exec into this container using Portainer, `ps` doesn't even work, and `ls` shows I'm in my videos folder.

Are my volumes messed up in compose? That could explain a few things. Trailarr is running locally; movies are in /Videos and shows are in /Recorded TV.

nandyalu commented 5 days ago

Looks like the `ps` command is not available within the container, so instead use `docker top trailarr` from your machine's terminal.

> Are my volumes messed up in compose? Could explain a few things. Trailarr is running locally and movies are in /Videos and Shows are in /Recorded TV

It is possible that your volumes are incorrect, since there is a space in /Recorded TV; try changing that to /"Recorded TV".
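If the image truly lacks `ps`, the container's Python interpreter (which runs Trailarr itself) can enumerate processes from `/proc` instead. A small sketch, assuming a Linux container with `/proc` mounted:

```python
import os

def list_procs():
    """Return (pid, command name) pairs by reading /proc directly."""
    procs = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                procs.append((int(pid), f.read().strip()))
        except OSError:
            pass  # the process exited between listdir() and open()
    return sorted(procs)

for pid, name in list_procs():
    print(pid, name)
```

It could be run in one shot via `docker exec trailarr python3 -c "..."`, though `docker top trailarr` from the host, as suggested above, is the simpler route.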

Starminder commented 5 days ago

I changed the TV path in the stack.

Here is top:

```
root@SPROCKET:/volume1/home/Starminder # docker top trailarr
PID    USER      TIME  COMMAND
12751  Starmind  9:59  {uvicorn} /usr/local/bin/python /usr/local/bin/uvicorn backend.main:trailarr_api --host 0.0.0.0 --port 7889
21182  Starmind  2:06  /usr/local/bin/ffmpeg -y -loglevel repeat+info -i file:/tmp/14-trailer.mkv -map 0 -dn -ignore_unknown -c copy -movflags +faststart -c:v libx264 -preset veryfast -crf 22 -c:a aac -b:a 128k -movflags +faststart -tune zerolatency file:/tmp/14-trailer.temp.mkv
```
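One detail worth noting in that ffmpeg command: the per-stream options `-c:v libx264` and `-c:a aac` take precedence over the blanket `-c copy`, so the trailer is fully re-encoded rather than stream-copied, which would be consistent with the high CPU use reported earlier. A toy parser (not Trailarr's code; `effective_codecs` is a hypothetical helper) illustrating which codec wins per stream type:

```python
def effective_codecs(argv):
    """Return the (video, audio) codecs that apply for an ffmpeg argv,
    where per-stream -c:v / -c:a override a blanket -c."""
    codecs = {}
    for i, arg in enumerate(argv):
        if arg == "-c":
            codecs["all"] = argv[i + 1]          # applies to all streams
        elif arg.startswith("-c:"):
            codecs[arg.split(":", 1)[1]] = argv[i + 1]  # per-stream override
    video = codecs.get("v", codecs.get("all"))
    audio = codecs.get("a", codecs.get("all"))
    return video, audio

argv = ["-i", "in.mkv", "-c", "copy", "-c:v", "libx264", "-c:a", "aac"]
print(effective_codecs(argv))  # ('libx264', 'aac') -- nothing is stream-copied
```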

Starminder commented 5 days ago

I had to add connections to Radarr/Sonarr again.

So far:

Statistics: Movies 4121, Movies Monitored 4121, Series 1543, Series Monitored 1543, Trailers downloaded 0

Starminder commented 5 days ago

image

nandyalu commented 4 days ago

Can you post a screenshot of any media details page showing the files section?

Starminder commented 3 days ago

Like this? image

nandyalu commented 3 days ago

Are you on Windows?

Starminder commented 3 days ago

Yes, for the client/browser. The server is an Asustor NAS.



nandyalu commented 3 days ago

Did you map your media folders to container properly?

Starminder commented 3 days ago

image

```yaml
        - /Videos:/media/movies
        - /"Recorded TV":/media/tv
```
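For what it's worth, the more conventional Compose fix for a path containing a space is to quote the whole mapping rather than part of the path; a sketch using the same host paths:

```yaml
    volumes:
      - "/Videos:/media/movies"
      - "/Recorded TV:/media/tv"
```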

Here's how it is going. Logs attached: Traillarr logs.txt