nathang21 opened this issue 4 months ago
Seems like this issue has gone away. Perhaps my Lidarr instance was just overloaded while scanning all the new media that was added.
Reopening, looks like this is an intermittent issue.
Hi! It has a 60-second timeout, which is pretty reasonable. It seems like you have intermittent issues, so it's likely something in your setup rather than at the application level.
How many artists and albums do you have synced in there? Above 1000? 10000?
Are you running Lidarr or the other *arrs on the NAS? If so, are any of the config/data dirs for the *arrs on storage that is NFS-mounted? The *arrs with the default SQLite database do not work well with that type of setup, which can lead to issues like this.
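For reference, the timeout applies to the HTTP call made against Lidarr's API. Here is a minimal sketch of what that looks like (not the application's actual code; it assumes Python with the requests library, and the URL, port, and API key are placeholders):

```python
# Minimal sketch: fetching the full artist list from Lidarr with an explicit
# request timeout. Assumes Python + requests; URL/port/key are placeholders.
import requests

LIDARR_URL = "http://localhost:8686"  # assumption: Lidarr's default port
API_KEY = "your-api-key"              # placeholder

def fetch_artists(timeout_seconds: float = 60.0):
    """Fetch all artists; a slow Lidarr response raises requests.exceptions.Timeout."""
    resp = requests.get(
        f"{LIDARR_URL}/api/v1/artist",
        headers={"X-Api-Key": API_KEY},
        timeout=timeout_seconds,  # the 60-second ceiling discussed above
    )
    resp.raise_for_status()
    return resp.json()
```

A response slower than `timeout_seconds` raises `requests.exceptions.Timeout` instead of returning data, which is the kind of intermittent failure a 60-second ceiling would produce.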
Thanks for the reply. I think I must just be barely hitting that 60-second timeout, depending on the available CPU/SSD capacity on my system and how fast Lidarr is able to respond (it is slower especially if other scheduled tasks are running in Lidarr). All my *arrs are set up the same way and only Lidarr has this issue, but since it only intermittently succeeds, I suspect responses are taking longer than 60 seconds on average.
Here is some metadata from my library: I have a lot of artists (>1000, due to the Spotify sync) but don't currently have a large media library on disk.
Artists: 1639
Inactive: 91
Continuing: 1548
Monitored: 25
Unmonitored: 1614
Tracks: 2503
Files: 2491
Total File Size: 20.5 GiB
Regarding the config/data dirs, I am not using any NFS-mounted storage. I have local HDDs for all the media, and Docker and all the configuration live on the SSD. Hardlinks are working for media/downloads.
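To double-check whether Lidarr really is exceeding the 60-second window, here is a rough way to time the artist endpoint directly (a sketch assuming Python and the requests library, same assumptions as above; the URL, port, and API key are placeholders):

```python
# Sketch: measure how long Lidarr takes to return the full artist list,
# to check whether responses actually exceed the 60 s window.
import time
import requests

start = time.monotonic()
resp = requests.get(
    "http://localhost:8686/api/v1/artist",   # placeholder URL/port
    headers={"X-Api-Key": "your-api-key"},   # placeholder key
    timeout=300,  # generous ceiling so slow responses can still be measured
)
elapsed = time.monotonic() - start
print(f"HTTP {resp.status_code} in {elapsed:.1f} s, {len(resp.json())} artists")
```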
Any other ideas besides increasing the timeout?
I've recently started getting these errors with Lidarr (which has recently grown significantly in size, not in media files but in artists synced from the Spotify integration), and I haven't been able to update my filters in a while (as far as I can tell from the logs). All the other *arrs, which are set up the same way, are not having this issue.
I don't feel like my library is particularly large, but assuming there isn't some other bug, perhaps the query could be optimized or the deadline timeout increased? Let me know if you need other information.
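For illustration only, here is a hypothetical way the timeout could be made configurable rather than fixed (a sketch assuming Python with requests; `LIDARR_TIMEOUT` is just an example name, not an existing setting of this project):

```python
# Hypothetical illustration: read the request timeout from an environment
# variable instead of hard-coding 60 s. LIDARR_TIMEOUT is an example name only.
import os
import requests

timeout_seconds = float(os.environ.get("LIDARR_TIMEOUT", "60"))

resp = requests.get(
    "http://localhost:8686/api/v1/artist",   # placeholder URL/port
    headers={"X-Api-Key": "your-api-key"},   # placeholder key
    timeout=timeout_seconds,
)
resp.raise_for_status()
```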
Logs:
Environment Details: