Closed: sammcj closed this issue 1 year ago.
Python 3 isn't the issue; it's a Vue problem. It's being worked on.
@OmgImAlexis we've already done a lot of tests, and we've already eliminated a lot as well.
One of the most alarming findings is that just loading his server:8081/home request took 5s. That's only fetching the network resource, so no JS is running at that point.
On my machine (a 10-year-old 1.6 GHz Atom, running at 100% CPU) it takes 200ms. I've looked at his Chrome perf report, and a lot of the static asset loading also takes almost 10x as long compared to mine.
Even with an empty main.db, it's still taking 2.5s for the /home call.
@sammcj could you do the following:
In Config -> General -> Interface -> Web interface, enable HTTP logs, and restart.
Now reload /home and check the Network tab in Chrome dev tools. Click the /home call and take a screenshot of the Timing tab.
Now in your application.log, find the TORNADO logs for that request. Mine look like:
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 302 GET /medusa/ (127.0.0.1) 6.39ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/home/ (127.0.0.1) 7.80ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/css/vendors.css?31419 (127.0.0.1) 5.90ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/css/vender.min.css?31419 (127.0.0.1) 3.41ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/css/themed.css?31419 (127.0.0.1) 3.12ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/css/bootstrap-formhelpers.min.css?31419 (127.0.0.1) 3.38ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/css/browser.css?31419 (127.0.0.1) 3.11ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/css/lib/jquery-ui-1.10.4.custom.min.css?31419 (127.0.0.1) 3.15ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/css/lib/jquery.qtip-2.2.1.min.css?31419 (127.0.0.1) 2.99ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/css/style.css?31419 (127.0.0.1) 2.98ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/css/print.css?31419 (127.0.0.1) 2.77ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/css/country-flags.css?31419 (127.0.0.1) 3.04ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/js/vendors.js?31419 (127.0.0.1) 8.14ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/js/vendors~date-fns.js?31419 (127.0.0.1) 2.94ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/js/medusa-runtime.js?31419 (127.0.0.1) 5.38ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/js/index.js?31419 (127.0.0.1) 3.17ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 304 GET /medusa/js/app.js?31419 (127.0.0.1) 2.68ms
2020-09-28 08:59:04 INFO TORNADO :: [b352bb6] 101 GET /medusa/ws/ui (127.0.0.1) 2.26ms
We're running multiple threads right...?
This kinda points towards a single thread holding up all other requests. Otherwise, if it's not that, then something on the page is preventing anything else from loading until the series endpoint has finished. Are we maybe using await somewhere when .then should be used instead?
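To illustrate the concern above (independent requests getting serialised behind one another), here's a minimal sketch of the same pattern in Python's asyncio, the model Tornado runs on server-side. The fetch function and delays are purely hypothetical stand-ins for network calls:

```python
import asyncio
import time

async def fetch(name, delay):
    # Hypothetical stand-in for an independent network call.
    await asyncio.sleep(delay)
    return name

async def sequential():
    # Awaiting one call before starting the next: total time is the SUM
    # of the delays, even though the calls don't depend on each other.
    a = await fetch("series", 0.2)
    b = await fetch("config", 0.2)
    return [a, b]

async def concurrent():
    # Start both at once (the JS .then / Promise.all style):
    # total time is the MAX of the delays.
    return await asyncio.gather(fetch("series", 0.2), fetch("config", 0.2))

start = time.perf_counter()
assert asyncio.run(sequential()) == ["series", "config"]
print("sequential: %.1fs" % (time.perf_counter() - start))  # ~0.4s

start = time.perf_counter()
assert asyncio.run(concurrent()) == ["series", "config"]
print("concurrent: %.1fs" % (time.perf_counter() - start))  # ~0.2s
```

If the frontend (or a Tornado handler) awaits independent calls one after another like `sequential()`, everything downstream stalls for the sum of the latencies.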
@OmgImAlexis good point. I think we are. But that's kind of a different issue. I'll need to make sure there's no issue with Tornado serving out requests before I hop onto the next one. But I appreciate your help!
@OmgImAlexis I know you have a high spec system. How is your performance? Like until the first draw?
Interestingly while this is happening I checked my unraid overview and CPU only goes up 10% and RAM barely moves.
Running a Xeon L5640 @ 2.27GHz so 12 threads.
The whole page is locked until the first draw which happens around 12s.
@p0psicles I loaded the page a few times; it randomly seems slightly faster this morning, loading in around 4.5-5.5s, but no doubt it'll get slower randomly again.
I've sent the profile to you via discord.
@OmgImAlexis, like you I don't notice any excess CPU utilisation server side either; to me it feels like inefficient JavaScript around the payload containing the shows.
@OmgImAlexis regarding threads: if Medusa is using threads, it doesn't seem to do a good job of it. If I perform a CPU-heavy task such as bulk updating, scanning media, etc., Python maxes out 1 CPU core/thread and does nothing with the other 7.
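That single-core behaviour is expected for CPU-bound work in CPython: the GIL lets only one thread execute Python bytecode at a time, so a thread pool doesn't add cores; a process pool does. A minimal, self-contained demonstration (timings will vary by machine; this is an illustration, not Medusa's code):

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # Pure-Python CPU-bound work; holds the GIL for its whole run.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, workers=4, n=2_000_000):
    # Run the same CPU-bound job on `workers` workers and time it.
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as ex:
        list(ex.map(burn, [n] * workers))
    return time.perf_counter() - start

if __name__ == "__main__":
    # Threads: the GIL serialises the loops, so roughly single-core speed.
    # Processes: each worker runs in its own interpreter on its own core.
    print("threads:   %.2fs" % timed(ThreadPoolExecutor))
    print("processes: %.2fs" % timed(ProcessPoolExecutor))
```

So a bulk update implemented with threads will still pin one core; only I/O-bound steps (network, disk) overlap usefully under threads.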
Chrome network request timings as requested by @p0psicles. Note the page load times randomly go up to around 30s if I use Chrome rather than Firefox (which is still around 6s) right now:
Switched to develop branch.
Here's my systemd unit file:
[Unit]
Description=Medusa Daemon
After=network.target
[Service]
User=apps
Group=apps
Type=simple
ExecStart=/usr/bin/python3.6 /opt/medusa/start.py -q --nolaunch --datadir=/opt/medusa
TimeoutStopSec=25
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
ps:
~ ps -ef | egrep -i medusa
apps 24049 1 0 09:46 ? 00:02:42 /usr/bin/python3.6 /opt/medusa/start.py -q --nolaunch --datadir=/opt/medusa
htop:
You’re not taking into account that chrome tends to skip caches when the network tab is open. That’s likely why you’re seeing a higher time.
Have you tried the latest develop commit?
True, yeah I don't usually use chrome, I'm always in Firefox or Safari (mobile).
I switched to develop yesterday afternoon, it does actually seem a little quicker at loading pages in general, I'll need to do some more timing tests on /home in FF and chrome today.
Develop on FF 81:
I managed to get it to load in 1.5s on Safari (iOS, iPhone 11 Pro) but that was by unticking half the columns.
I'm also now noticing more 403s on the websockets connection in FF81:
What is interesting is that the develop branch seems a lot faster on Chrome now (3.27s):
I've done some screen recordings of Medusa reloading the /home page in both Firefox and Chrome, this is running off develop and with cache disabled. I've recorded both the network tab during a load and done a performance profile as well.
These are 4k so the text and graphs should be readable (if youtube doesn't over-compress them).
Firefox:
Chrome:
Did you ever manage to find a solution for this?
When opening my home page it takes around 23 seconds, and this is internal over a 1 Gbit network.
One of the biggest drains seems to be loading the poster images for all shows, as they are retrieved in full size for each show. I'm using the Small Poster layout.
No, several friends had the same issue and in the end we all moved to sonarr.
On 2 Nov 2021, at 18:57, Rouzax @.***> wrote:
@Rouzax that should only be an issue the first time. Also, it's lazy loading, so it should only load the posters that are visible on screen. Although I'm not 100% sure about that.
But as said, it should get the poster images from the browser cache the next time you open the page. By the way, numerous improvements have been made to try to improve the load speed of the home layout since this issue was opened.
We added localStorage caching, transitioned everything to Vue, and a few other things. But the issue remains the large number of Vue components it has to create when showing a lot of shows. I can't do much about that.
Maybe @OmgImAlexis can take a look and consult on what can be improved.
@p0psicles it does not appear to use the cached images. Maybe it's an idea to generate and store a thumbnail of the posters in Medusa; this could speed it up.
It does do lazy loading, but it still seems to grab the images each time. Medusa is running on hardware that is powerful enough, with CPU throttling disabled.
CPU
Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz
Base speed: 2.00 GHz
Sockets: 1
Virtual processors: 12
Virtual machine: Yes
L1 cache: N/A
Utilization: 7%
Speed: 2.00 GHz
Up time: 0:01:06:20
Processes: 132
Threads: 1632
Handles: 44850
Memory
16.0 GB
Slots used: N/A
Hardware reserved: 8.0 GB
Maximum memory: 16.0 GB
Available: 4.6 GB
Cached: 630 MB
Committed: 3.4/10.4 GB
Paged pool: 124 MB
Non-paged pool: 119 MB
In use (compressed): 3.3 GB (0 MB)
There is a reverse NGINX proxy in between, but that is also running on the same virtualized hardware.
Here is a screen recording from Firefox of my home page; I refreshed twice. https://user-images.githubusercontent.com/4103090/139833836-38e4c268-1d4d-4cd5-86f7-8befc247bc69.mp4
EDIT: Second screen recording directly to Medusa without Reverse Proxy https://user-images.githubusercontent.com/4103090/139835052-9698d3d9-d5dc-43ff-bd45-ddc80ff91c2b.mp4
@Rouzax I don't know if it is possible for you, but could you try running Medusa without the reverse proxy? I have the feeling that going through the proxy is what is causing this.
@medariox I was already in the process of capturing that 😄 and it is just as fast/slow. I've updated the original post.
When looking at the image cache in Medusa, there is a thumbnails folder, but most of the time it just holds images at the same (full) size.
EDIT: As a test, I took all 234 poster images in the thumbnails folder, resized them to a max of 1024 pixels on the longest side, and saved them as JPG. The folder went from 135MB to 28MB, and it does speed up the loading of the posters themselves; see the resulting screen recording:
https://user-images.githubusercontent.com/4103090/139838730-284dc3a3-6e38-45cb-8db6-a7e7844a2f53.mp4
Afterwards I resized them again, to 640px with optimized JPG, which reduced the total size to 12.5MB and sped up the loading of posters even more.
EDIT2: I'm also getting a lot of 404s on images that don't exist, and each one adds additional time.
Last I looked at this, it was the JS creating tables inside tables that really slowed things down. 🤷
@Rouzax but can you check the browser devtools network tab to see if it's getting them from cache?
It's downloading them again.
It was downloading them again because I have ClearURLs running, which blocks ETags.
https://gitlab.com/KevinRoebert/ClearUrls
I've disabled that now, but it doesn't really speed things up.
See, for me it's not. It's getting them from cache.
Not for me, but after resizing the images, loading them is no longer the majority of the time.
EDIT: Chrome does use the cache, but Firefox doesn't.
Response headers for an image in Firefox:
HTTP/2 200 OK
server: nginx
date: Tue, 02 Nov 2021 12:05:32 GMT
content-type: image/jpeg
content-length: 49941
x-medusa-server: 0.5.19
access-control-allow-origin: *
access-control-allow-headers: Origin, Accept, Authorization, Content-Type, X-Requested-With, X-CSRF-Token, X-Api-Key, X-Medusa-Server
access-control-allow-methods: OPTIONS, GET
cache-control: max-age=86400
etag: "caee7b613b6c9e0c3588c6bd57ea7630a568f2eb"
vary: Accept-Encoding
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
referrer-policy: no-referrer-when-downgrade
strict-transport-security: max-age=31536000; includeSubDomains; preload
X-Firefox-Spdy: h2
Response from Chrome:
access-control-allow-headers: Origin, Accept, Authorization, Content-Type, X-Requested-With, X-CSRF-Token, X-Api-Key, X-Medusa-Server
access-control-allow-methods: OPTIONS, GET
access-control-allow-origin: *
cache-control: max-age=86400
content-length: 57620
content-type: image/jpeg
date: Tue, 02 Nov 2021 12:03:37 GMT
etag: "37fd39d958ec8a900ea771c9f96aea49085d507c"
referrer-policy: no-referrer-when-downgrade
server: nginx
vary: Accept-Encoding
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-medusa-server: 0.5.19
x-xss-protection: 1; mode=block
Your nginx reverse proxy is doing something to the headers.
HTTP/2 200 OK
server: nginx
I bet that when using FF without the reverse proxy, it's showing the cache control header.
No, that is not the case.
Response headers for Firefox without NGINX:
HTTP/1.1 200 OK
Server: TornadoServer/6.1
Content-Type: image/jpeg
Date: Tue, 02 Nov 2021 12:15:37 GMT
X-Medusa-Server: 0.5.19
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Origin, Accept, Authorization, Content-Type, X-Requested-With, X-CSRF-Token, X-Api-Key, X-Medusa-Server
Access-Control-Allow-Methods: OPTIONS, GET
Cache-Control: max-age=86400
Etag: "b7beb7b4c1af68de2a5841c3bd819d55e5a70cdb"
Content-Length: 50356
Vary: Accept-Encoding
After hitting refresh a couple of times, it does get some out of cache and others still not. This is still without NGINX in between.
One that was not cached.
HTTP/1.1 200 OK
Server: TornadoServer/6.1
Content-Type: image/png
Date: Tue, 02 Nov 2021 12:25:04 GMT
X-Medusa-Server: 0.5.19
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Origin, Accept, Authorization, Content-Type, X-Requested-With, X-CSRF-Token, X-Api-Key, X-Medusa-Server
Access-Control-Allow-Methods: OPTIONS, GET
Cache-Control: max-age=86400
Etag: "790b4feef75bfb89a435a5f0ff7614ed6e814d0d"
Content-Length: 2567
Vary: Accept-Encoding
One that was.
HTTP/1.1 304 Not Modified
Server: TornadoServer/6.1
Date: Tue, 02 Nov 2021 12:25:04 GMT
X-Medusa-Server: 0.5.19
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Origin, Accept, Authorization, Content-Type, X-Requested-With, X-CSRF-Token, X-Api-Key, X-Medusa-Server
Access-Control-Allow-Methods: OPTIONS, GET
Cache-Control: max-age=86400
Etag: "aab54aaf04f285639f42f01217301d668b04ce47"
Vary: Accept-Encoding
So yeah, that proves it. FF is caching (or racing it); see: https://support.mozilla.org/en-US/questions/1267945
Chrome is also caching it. I would recommend fixing your reverse proxy for better performance.
But even after cutting out the reverse proxy it is still slow in both Chrome and Firefox. "Raced" means that it tried to read it from cache, but it was faster from the actual web server (this happens with and without NGINX in between for me).
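The 200-vs-304 behaviour discussed above is ETag revalidation: the browser resends the tag it cached in an If-None-Match header, and the server skips the body when it still matches. A simplified sketch of that server-side check (Tornado's default ETag is a quoted SHA-1 of the content; this is a standalone illustration, not Medusa's actual handler):

```python
import hashlib

def compute_etag(body):
    # Quoted SHA-1 of the body, in the style of Tornado's default
    # RequestHandler.compute_etag (simplified illustration).
    return '"%s"' % hashlib.sha1(body).hexdigest()

def respond(body, if_none_match=None):
    # If the client's cached tag still matches, send 304 with no body;
    # the browser then reuses its cached copy.
    etag = compute_etag(body)
    if if_none_match == etag:
        return 304, b"", etag
    return 200, body, etag

poster = b"...jpeg bytes..."
status, payload, etag = respond(poster)                    # first visit
status2, payload2, _ = respond(poster, if_none_match=etag)  # revalidation
print(status, status2)  # 200 304
```

This is why an extension that strips ETags, or a proxy that rewrites cache headers, forces the full image download on every load.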
I'll go through this during this week, there's quite a lot we can fix based on what I've learnt over the last year. We should be able to get this back to 60fps and less than 2s of load time even on really large servers.
But the issue remains the large number of Vue components it has to create when showing a lot of shows. I can't do much about that.
Yes we can. 😉 Learnt a few new tricks.
I already sped up my install by manually resizing the images in cache\images\tmdb\thumbnails,
but Medusa keeps replacing them with the larger ones.
It doesn't really make sense to store the thumbnails at 2000x3000.
It also doesn't help that my show list contains 217 shows 😄
I'll check that out. It shouldn't get thumbnails that big. I think the only way to override the thumbnails is by having them in your show folder.
I've looked around, and the biggest poster thumbnails in the GUI are 180x270.
It would make sense to scale all the thumbnails to that.
My cache\images\tmdb\thumbnails
starts at 144MB, and after resizing everything to a max of 180x270 it shrinks to 3.9MB.
For the files in cache\images\tmdb
it would make sense to stay large, as those can be used to open the full-size poster after clicking on the thumbnail.
In my case I do indeed have the poster files with the shows; I leverage the Artwork Dump that exports all the artwork in Kodi to the filesystem. My show folders look like this:
Been having a few issues with my computer but hoping to get to this over the next 2 weeks.
I've got it all installed and have my production DB loaded with 885 shows. Let's see what we can do now. :)
😃 There's always somebody with a bigger library. Let me know if there is anything you want me to test.
So I've tracked down the issue, and it's to do with page rendering of the individual items. Network calls were 100% fine even with my large library. Mako, etc. wasn't an issue either.
Curiously @Rouzax, how are the loading times on banner vs poster? I'm going to try to tackle banner first and then work on the others.
Will find some time today to look at that.
I have to do some more testing with banner, but I do notice there is a difference in how the banners load: with poster it loads them one by one, while banner seems to load all of them at the same time.
I still notice that the banners are stored as high-res (1000x185), even though the page only needs 360x66.
@p0psicles could we get a script added to auto-convert the banners, posters, etc. to the needed sizes? I feel this would help a LOT with the page rendering, as we could then set the size on the element, and reflow wouldn't need to kick in at all after the images are loaded since they'll match the source size. No scaling needed by the browser.
Maybe you could leverage Pillow, which has a lot of capability for handling images: https://pillow.readthedocs.io/en/stable/
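Along those lines, a sketch of what such a conversion pass could look like with Pillow. The folder path and the 180x270 target come from the sizes mentioned earlier in this thread, and the function name is made up for illustration:

```python
from pathlib import Path

from PIL import Image  # pip install Pillow

# Biggest poster thumbnail size the GUI uses, per this thread (assumption).
THUMB_SIZE = (180, 270)

def shrink_thumbnails(folder):
    """Downscale every JPEG in `folder` in place so it fits THUMB_SIZE."""
    for path in Path(folder).glob("*.jpg"):
        with Image.open(path) as img:
            if img.width <= THUMB_SIZE[0] and img.height <= THUMB_SIZE[1]:
                continue  # already small enough, leave untouched
            img.thumbnail(THUMB_SIZE)  # resizes in place, keeps aspect ratio
            img.save(path, "JPEG", quality=85, optimize=True)

# Example (path from this thread; adjust to your install):
# shrink_thumbnails("cache/images/tmdb/thumbnails")
```

Image.thumbnail only ever shrinks, so already-small files are never upscaled; a real implementation would also need to handle PNG banners/fanart and regenerate thumbnails when the source poster changes.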
It seems that only posters are being copied to the thumbnails folder at the moment; banners and fanart are not.
I'll have to analyze the issue first, before throwing libraries at it.
For the posters you'll always need a minimum size, as users control the width of these images themselves in the poster layout.
Very good point 😄 because I resized mine manually, they don't fit any more.
It would make sense to pick the biggest size they can be and use that as the thumbnail size: 238x350.
Describe the bug
Loading /home (with the list of shows) takes between 5.2 and 8.5s.
I have been discussing and sharing database and log dumps with @p0psicles on this issue.
To Reproduce
Steps to reproduce the behaviour:
It does not make any difference which browser you use on the client side but I've tried: Firefox 81 on a new i9 MBP, Chrome stable on the same machine, Safari on iOS 13 and 14 on an iPhone 11 Pro.
Note: I have also had a friend who is able to reproduce the poor performance - although not quite as bad, at around 2.5-3s - running the latest Medusa Docker image and on a wired network with both Firefox and Chrome.
Expected behaviour
The /home page should not take longer than 1-2 seconds at most to load.
Screenshots
Medusa (please complete the following information):
b352bb6924afcdfafce176a540d53ce405ca1312
Debug logs (at least 50 lines): General > Advanced Settings > Enable debug
Additional context
The more columns you deselect (as pictured below), the faster the page loads; for example, with no columns ticked the page loads in just over 1s, but with all ticked it can take as long as 8.5s.
I've tried: