gardiol opened 11 hours ago
Weird, the actual timeout here is from map_task_to_instance(task), which is a single indexed database lookup. This is normally a fast lookup. It is nested though, so it isn't the quickest if you have a thousand-plus tasks to look up, but it still shouldn't take longer than the worker timeout to execute.
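To illustrate the scaling, here is a standalone sketch (not the actual TubeSync code): each lookup is cheap on its own, but the page does one per queued task, so the total time grows with the queue length, and with disk latency when SQLite isn't on fast storage.

```python
import sqlite3
import time

# Hypothetical illustration, not TubeSync code: one indexed SELECT per
# queued task, so total page time scales with how many tasks are queued.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, key TEXT UNIQUE, title TEXT)")
db.executemany(
    "INSERT INTO media (key, title) VALUES (?, ?)",
    [(f"key-{i}", f"video {i}") for i in range(2000)],
)

task_keys = [f"key-{i}" for i in range(1284)]  # roughly the queue size mentioned later in this thread

start = time.monotonic()
for key in task_keys:
    # analogous to map_task_to_instance(task): a single indexed lookup per task
    db.execute("SELECT id, title FROM media WHERE key = ?", (key,)).fetchone()
print(f"{len(task_keys)} lookups took {time.monotonic() - start:.3f}s")
```

In memory this finishes almost instantly; the same pattern against an SQLite file on slow or USB-attached storage is where it starts eating into the worker timeout.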
Try resetting your tasks to start with:
https://github.com/meeb/tubesync/blob/main/docs/reset-tasks.md
Resetting the tasks indeed helped a lot; the tasks have somehow restarted (at least indexing is running)... I will check in a bit whether it keeps behaving the same...
I can say that, while the tasks page loaded fine immediately afterwards, after a couple of minutes I get the random 502 errors again.
Provided your number of tasks is going down, at some point it'll start working. The /tasks page really just lists tasks; it doesn't do a lot. Just curious, but what platform are you running this on? And what database are you using?
I am on Gentoo Linux (i7 9th gen, 48 GB RAM), a CLI-only system shared with other services, but as I said, very low load.
I am only running the docker compose. This is the setup I am using:
Looks fine, is your /config volume on a reasonably fast disk?
It's on a Linux software RAID1, but it's on an external USB JBOD, so not the fastest actually... I might try moving it to an internal SSD, if that would help significantly?
Yeah, querying an SQLite database running on a USB mounted hard drive is going to be slow.
Ok, moved, will let you know.
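For anyone following along, the change is just repointing the /config bind mount at the SSD. The paths and service layout below are placeholders rather than my exact compose file; only the image name, port, and container paths match TubeSync's defaults.

```yaml
# Hypothetical excerpt: move the /config bind mount (where the SQLite
# database lives) off the USB-mounted RAID1 onto an internal SSD path.
services:
  tubesync:
    image: ghcr.io/meeb/tubesync:latest
    ports:
      - "4848:4848"
    volumes:
      - /mnt/ssd/tubesync/config:/config           # now on the internal SSD
      - /mnt/storage/tubesync/downloads:/downloads # bulk media can stay on slower storage
```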
I have been hitting this issue for a long time.
The TubeSync web GUI works fine 1 out of 5 times; the other 4 times it waits for some 30 seconds, then bails out with an "internal server error".
My setup is TubeSync containerized, with my NGINX reverse proxy in front, proxying directly to TubeSync (port 4848).
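For reference, the proxy block is roughly this shape (the hostname and timeout are placeholders for illustration; only the port is the real one):

```nginx
# Illustrative sketch of the reverse proxy in front of the TubeSync container.
server {
    listen 80;
    server_name tubesync.example.com;   # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:4848;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 120s;        # generous timeout while debugging slow pages
    }
}
```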
Container log is:
It seems that when the tasks page loads, it's always downloading a specific video from a specific channel. This video has been downloading for days now.
Restarting the container will temporarily fix the issue, but soon it's back.
In the container logs, I have nothing other than what I copied above.
It's not a resource issue: neither the container nor the host is overloaded, RAM is plentiful and so are the other resources. There is no load at all.
The NGINX in front shows no issues at all, no errors. All requests are passed down to TubeSync, and the timeouts seem to come from TubeSync itself.
Edit: I have 1284 tasks queued; I can see tasks that have been queued there for weeks...