My docker-compose.yml, if it is helpful:
---
version: '3.3'
services:
  wrapperr:
    environment:
      - TZ=America/New_York
    container_name: wrapperr
    image: aunefyren/wrapperr:latest
    restart: always
    volumes:
      - './config:/app/config'
      - "/etc/localtime:/etc/localtime:ro"
    labels:
      - com.centurylinklabs.watchtower.enable=true
      - autoheal=true
    networks:
      - backend
    healthcheck:
      test: ['CMD-SHELL', 'nc -z localhost 8282 || exit 1']
      interval: 2s
      timeout: 1s
      retries: 10
networks:
  backend:
    external: true
Hey, thanks for submitting an issue. This is a new one.
Do you get the same result if you try the cache function on the admin page?
I disabled the autoheal container in case it was being too aggressive with restarting the container. The behavior I'm now seeing is that I hit the unwrap button and the container starts downloading days from tautulli. At some point in the process, the web UI displays the "API response can't be parsed" message, but the container continues downloading days. When it gets to the end, I see:
wrapperr | 2023/12/12 07:31:06 Downloading day: 2023-12-10 from server 'tautulli'.
wrapperr | 2023/12/12 07:31:06 Downloading day: 2023-12-11 from server 'tautulli'.
wrapperr | 2023/12/12 07:31:07 Tautulli refresh/download stage completed.
wrapperr | 2023/12/12 07:31:07 Show buddy retrieved.
wrapperr | [GIN] 2023/12/12 - 07:31:07 | 200 | 1m21s | 127.0.0.1 | POST "/api/get/statistics"
wrapperr | Error #01: write tcp 172.21.0.22:8282->172.21.0.21:41370: write: broken pipe
I've had the cache function enabled this entire time. I tried disabling it and get the exact same results I just described with it on or off.
No, I mean the Caching page. It should be visible on the main menu of the admin page.
Ah, I see. Well, trying to cache the tautulli data results in the container downloading from 2022-12-31 until 2023-02-18, at which point it loops and does the same thing over again endlessly. I had to kill the container to stop it.
Huh, very odd. Does the cache.json file get any content? Could it be a permissions issue?
I just changed it to 365 days and tried multiple runs. Somewhere around 10–14 September (but consistently the same date each run) it pops up a JavaScript "Failed to parse API response" message.
I'm starting to wonder if the data in my tautulli DB is borked on that day. Earlier this year (and critically, in the same timeframe as the API response failure) I upgraded my entire setup, and at one point a bunch of content accidentally got removed from the library due to a typo on one of my volume mounts. I'm wondering if that one day somehow resulted in some bad DB data.
Forgot to add, the cache.json file exists and appears to have the right permissions, but only contains [].
¯\_(ツ)_/¯
The code which communicates with the Tautulli API is pretty old, so perhaps a bad response (error) from there crashes it? I'll take a look.
I just tried narrowing down the failure date, but this time it threw the "failed to parse API response" error in August rather than September. It keeps going on the container side, though, but nothing ever gets written to cache.json and it loops forever.
defer res.Body.Close()
body, err := io.ReadAll(res.Body)
var body_reply models.TautulliGetHistoryReply
json.Unmarshal(body, &body_reply)
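// Note: the error returned by json.Unmarshal is discarded here, and the err
// checked below is still the one from io.ReadAll, so a parse failure never
// takes the error path.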
if err != nil {
	log.Println(err)
	return []models.TautulliHistoryItem{}, errors.New("Failed to parse Tautulli response.")
}
Well, this code is horrible and needs patching. If your Tautulli API gives an error it would probably crash.
Can you test with the image aunefyren/wrapperr:beta?
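For context, a patched version of that snippet would presumably check each error where it occurs and surface non-200 replies instead of silently ignoring the parse failure. A rough sketch of that shape, kept in the same excerpt style as the code above (this is an assumption, not the actual change in the beta image):

```go
defer res.Body.Close()

body, err := io.ReadAll(res.Body)
if err != nil {
	log.Println(err)
	return []models.TautulliHistoryItem{}, errors.New("failed to read Tautulli response")
}

// Surface non-200 replies instead of trying to parse them as history data.
if res.StatusCode != http.StatusOK {
	log.Printf("Tautulli returned HTTP %d: %s", res.StatusCode, string(body))
	return []models.TautulliHistoryItem{}, errors.New("Tautulli returned an error response")
}

var body_reply models.TautulliGetHistoryReply
if err := json.Unmarshal(body, &body_reply); err != nil {
	log.Printf("failed to parse Tautulli response: %v", err)
	return []models.TautulliHistoryItem{}, errors.New("failed to parse Tautulli response")
}
```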
Well this explains some of it. It doesn't appear to be able to parse any records from tautulli. Every record reports an error like:
Failed to parse Tautulli response. Error: json: cannot unmarshal string into Go struct field TautulliHistoryItem.response.data.data.parent_rating_key of type int
I've attached all 365 days of logs here: wrapperr.log
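For readers following along: that error means Tautulli is returning parent_rating_key as a JSON string while the Go struct declares it as an int. A minimal, self-contained reproduction (simplified struct, not Wrapperr's actual model):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for the real TautulliHistoryItem struct.
type historyItem struct {
	ParentRatingKey int `json:"parent_rating_key"`
}

func main() {
	// Tautulli sends the key as a quoted string, but the field is declared as int.
	data := []byte(`{"parent_rating_key": "12345"}`)

	var item historyItem
	err := json.Unmarshal(data, &item)
	fmt.Println(err)
	// json: cannot unmarshal string into Go struct field historyItem.parent_rating_key of type int
}
```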
Try the new beta image again, should print the returned body.
Where should I be seeing the new output in specific? I just did a run using the cache tool on the admin page and the output looks exactly the same as the logs I linked.
Should be in both the log and the console, but it only prints when the HTTP status code is not 200. Guess it is 200, then. I'll create another image.
It's out.
Logs are attached in this self-expiring link: https://send.bitwarden.com/#Y0UM2wvHHkuwsbDWAPXEjQ/92KbFloQNdT7SaXGmwGceA
.log.bz2 file?
Yep. You should be able to decompress using bunzip2 from a console, or with 7-Zip if you're using Windows.
Please send a .log file. You can cut out any unwanted data or just post a snippet here.
My apologies, I was trying to be helpful by compressing it :(
New link: https://send.bitwarden.com/#hx5YZm6bDEaaeLDWAPj4LA/SPJhOYH6tlrpNdFKD313Jg
No worries, just being cautious.
Try the beta image now, should parse the data better.
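One common way to make that parsing more tolerant is a custom integer type whose UnmarshalJSON accepts either a bare number or a quoted one; a sketch of that approach follows (the beta image may well handle it differently):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// FlexInt decodes a JSON value that may arrive as either 2023 or "2023".
type FlexInt int

func (f *FlexInt) UnmarshalJSON(data []byte) error {
	s := strings.Trim(string(data), `"`)
	if s == "" || s == "null" {
		*f = 0
		return nil
	}
	n, err := strconv.Atoi(s)
	if err != nil {
		return err
	}
	*f = FlexInt(n)
	return nil
}

// Simplified stand-in for the real history item struct.
type historyItem struct {
	ParentRatingKey FlexInt `json:"parent_rating_key"`
	Year            FlexInt `json:"year"`
}

func main() {
	var item historyItem
	_ = json.Unmarshal([]byte(`{"parent_rating_key": "12345", "year": 2023}`), &item)
	fmt.Println(item.ParentRatingKey, item.Year) // 12345 2023
}
```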
I paged through the output side by side with the previous caching attempt and it appears to be exactly the same. Should I be seeing a difference in the logs?
Mmm, yes. Is it still saying cannot unmarshal string into Go struct field TautulliHistoryItem.response.data.data.year of type int or similar?
I mean the output is literally exactly the same, line for line (with only the timestamp changed, of course)
Try again now :S
Ok, getting zero errors from the output, but still got the JS popup indicating API failure and it loops trying to cache from tautulli.
Interesting, anything in the cache.json? Nothing in the log?
Nope, nothing. It resembles the log/output that it was producing before you pushed the beta, actually. If you've got a command I can run to confirm the release/version (or even produce a hash of a specific code file), I'm happy to run it.
Could you open the network overview in your browser's developer console and verify what the API responds with? HTTP response code and response body.
The JSON response shown in the network tab is:
{
  "wrapperr_version": "v3.2.2",
  "application_name": "Wrapperr",
  "plex_auth": true,
  "wrapperr_front_page_title": "Did you get that thing from Spotify and wondered what your Plex statistics looked like?",
  "wrapperr_front_page_subtitle": "Well, have a look...",
  "wrapperr_root": "",
  "client_key": "redacted",
  "wrapperr_configured": true,
  "winter_theme": true,
  "basic_auth": false,
  "message": "Retrieved Wrapperr version.",
  "error": false
}
That seems like it parsed. What network reply is an error/unparseable?
It consistently throws an HTTP 504 on 09-Sept-2023. The error page is coming from nginx, but I'm not sure whether that's wrapperr's webserver, tautulli's webserver, or the nginx reverse proxy I have in front of everything.
I just installed Wrapperr today and was having the exact same problem. It's running behind nginx, and I was trying to cache 365 days' worth of data. When I started the cache bypassing nginx, the "API response can't be parsed" error stopped happening. So it looks like nginx is causing the issue, but I'm not sure why.
I think the long wait time is causing the proxy to time out. You could either decrease the days between saves while caching, or change the timeout setting in the Nginx config:
proxy_read_timeout 600s;
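For reference, that directive goes inside the server or location block that proxies Wrapperr, roughly like this (the upstream name and port are assumptions based on the compose file above):

```nginx
location / {
    proxy_pass http://wrapperr:8282;
    proxy_read_timeout 600s;
}
```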
I can confirm this fixes the API connection errors! Unfortunately I'm still not getting anything written to cache.json. I successfully ran a cache save of 365 days, both directly from the container on TCP 8282 and through the nginx proxy with the additional read timeout setting. It just doesn't write to cache.json. It seems to think it did:

| Time | Message |
|---|---|
| 07:19:35 | Creating new cache request. Maximum 365 days. |
| 07:20:56 | Completed caching with a maximum of 365 days. |
| 07:20:56 | Finished caching request. |
I'm certain the container is able to write to the file because I shelled in and did it:
cobb@scummbar:/opt/svc/wrapperr/config$ ls -al
total 46336
drwxrwsr-x 3 cobb root 4096 Dec 12 08:08 .
drwxrwsr-x 3 cobb root 4096 Dec 13 07:10 ..
-rw-r--r-- 1 root root 119 Dec 9 2022 admin.json
-rw-r--r-- 1 root root 3 Dec 12 08:08 cache.json
-rw-r--r-- 1 root root 7552 Dec 13 07:19 config.json
drwxr-sr-x 2 root root 4096 Dec 12 07:05 links
-rw-r--r-- 1 root root 9904 Dec 12 07:31 users.json
-rw-r--r-- 1 root root 47401867 Dec 13 07:20 wrapperr.log
cobb@scummbar:/opt/svc/wrapperr/config$ cat cache.json
[]
cobb@scummbar:/opt/svc/wrapperr/config$ cd ..
cobb@scummbar:/opt/svc/wrapperr$ docker compose exec wrapperr /bin/bash
root@d6b4a7ef171d:/app# cd config
root@d6b4a7ef171d:/app/config# ls -al
total 46336
drwxrwsr-x 3 1001 root 4096 Dec 12 08:08 .
drwxr-xr-x 1 root root 4096 Dec 13 06:54 ..
-rw-r--r-- 1 root root 119 Dec 9 2022 admin.json
-rw-r--r-- 1 root root 3 Dec 12 08:08 cache.json
-rw-r--r-- 1 root root 7552 Dec 13 07:19 config.json
drwxr-sr-x 2 root root 4096 Dec 12 07:05 links
-rw-r--r-- 1 root root 9904 Dec 12 07:31 users.json
-rw-r--r-- 1 root root 47401867 Dec 13 07:20 wrapperr.log
root@d6b4a7ef171d:/app/config# echo '[response]' > cache.json
root@d6b4a7ef171d:/app/config# cat cache.json
[response]
root@d6b4a7ef171d:/app/config# exit
exit
cobb@scummbar:/opt/svc/wrapperr$ cd config/
cobb@scummbar:/opt/svc/wrapperr/config$ ls -al
total 46336
drwxrwsr-x 3 cobb root 4096 Dec 13 07:27 .
drwxrwsr-x 3 cobb root 4096 Dec 13 07:10 ..
-rw-r--r-- 1 root root 119 Dec 9 2022 admin.json
-rw-r--r-- 1 root root 11 Dec 13 07:27 cache.json
-rw-r--r-- 1 root root 7552 Dec 13 07:19 config.json
drwxr-sr-x 2 root root 4096 Dec 12 07:05 links
-rw-r--r-- 1 root root 9904 Dec 12 07:31 users.json
-rw-r--r-- 1 root root 47401867 Dec 13 07:20 wrapperr.log
cobb@scummbar:/opt/svc/wrapperr/config$ cat cache.json
[response]
Mmmm, good you verified the file is writeable. And Cache results for later use is enabled on Wrapperr settings?
I can confirm that was the issue. I was certain I'd enabled that the other day when I updated the date for this year, but I must not have.
It might be worth ignoring that setting for caches produced on /admin/caching. I don't ever see a circumstance where a user goes to that page, clicks the Cache button, nothing gets saved, and they think, "this is what I expected to happen."
At minimum I'd expect the application to tell me that it won't save the results and to disable the button until I tick the box.
Ultimately this has fixed all of my issues. Thanks a million for sticking with me through all of this :) Feel free to close when you wish.
Have a wonderful holiday!
Ahh, glad we figured this out.
It did warn/stop you in this case before some major refactoring. Not sure what happened to that. I'll make sure to add a warning.
Thank you for your help!
Fixed in v3.2.3.
When updating Wrapperr to run for this year (I'd love to have a box that says "always get the last 12 months"; I sent out a link to it and forgot entirely that I'd have to go into the admin panel and update the timeframe), I see that it downloads a number of days' worth of tautulli data, at which point the container crashes and the main page displays "API response can't be parsed."
I can't otherwise find any useful logs. It seems able to talk to tautulli with no issue. Sometimes it downloads one day's worth of logs and then crashes. Other times it downloads 8 or 9 months of days and then crashes. Example output:
There are no other meaningful logs I can find. Tautulli isn't reporting anything odd on its side, either.