Open · TheFrenchGhosty opened this issue 4 years ago
It is probably not a single memory leak. There are probably multiple ones.
I get memory leaks at precisely 12 AM every night, and if I change the time zone in the settings it recurs at 12 AM in the new time zone, even when it is actually bright daylight outside. It looks like intentional malicious code has been injected into about half of the instances.
It looks like intentional malicious code has been injected into about half of the instances.
Why does it look like that? I've never heard of this "12 AM memory leak" before. Do you run backups at 12 AM? Do you run other cron jobs at that time? Does it also happen on a fresh, clean server?
@GitWaifu
It looks like intentional malicious code has been injected into about half of the instances.
So, according to you, omarroth intentionally added code that triggers a memory leak and ends up crashing the instances, because he absolutely doesn't want you to host it?
Seriously? Don't use the software if you don't like it, but stop saying bugs are caused by malicious intent.
PS: read that: https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation
Is there a way to run Valgrind or something similar?
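A minimal sketch of such a run, assuming a locally compiled binary at ./invidious (the path and the Python wrapper are assumptions, not a documented procedure). Note that Crystal uses the conservative Boehm GC, which tends to produce many false positives under Valgrind, so the report needs careful reading:

```python
# Hedged sketch: run the compiled Invidious binary under Valgrind.
# Crystal's conservative (Boehm) GC makes Valgrind reports noisy,
# so treat "definitely lost" blocks with skepticism.
import subprocess

subprocess.run([
    "valgrind",
    "--leak-check=full",        # report individual leaked blocks
    "--log-file=valgrind.log",  # the report gets long; keep it on disk
    "./invidious",              # assumed path to the compiled binary
])
```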
Malicious intent, even if it is probably not the primary cause, should not be excluded. This kind of product casts a shadow on Google, and just as they already did with user agents for Firefox or Internet Explorer, why wouldn't they try by every means to shut down this kind of project? Also, not in this ecosystem but in the JavaScript one, several libraries were and will be compromised; that is a side effect of dependency managers (which should not be used, if you want my point of view).
The instances with trouble may use an installer (I did), and in that case a third party could be the root cause too.
I get memory leaks at precisely 12 AM every night, and if I change the time zone in the settings it recurs at 12 AM in the new time zone, even when it is actually bright daylight outside.
Sounds more like Invidious does not handle the day change too well, and somewhere there is leaking memory.
I have done a test on an instance on localhost (so I am sure nobody else can connect to it). I launch it, watch a video, and then stop using it (I even close the page in the browser to make sure no JavaScript is running in the background).
I can see that the invidious process uses some CPU every 1 to 2 minutes, and just after that the memory consumption of the container increases by ~2MB and never drops.
Not sure if it helps.
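A quick way to capture this pattern as a log rather than eyeballing it is to poll the container's memory usage on a fixed interval. A minimal sketch, assuming the container is named invidious (adjust to your setup):

```python
# Poll a Docker container's memory usage so the "~2MB every 1-2 min,
# never drops" pattern can be recorded over time.
import subprocess
import time

CONTAINER = "invidious"  # assumed container name; change as needed

while True:
    mem = subprocess.run(
        ["docker", "stats", "--no-stream", "--format", "{{.MemUsage}}", CONTAINER],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"{time.strftime('%H:%M:%S')}  {mem}")
    time.sleep(30)  # sample every 30 s; the growth shows up within minutes
```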
@SuperSandro2000
I get memory leaks at precisely 12 AM every night, and if I change the time zone in the settings it recurs at 12 AM in the new time zone, even when it is actually bright daylight outside.
Sounds more like Invidious does not handle the day change too well, and somewhere there is leaking memory.
Really interesting... it might indeed be a thing, though it's a bit strange...
OK, so we have a start. An egrep on the code for values related to time could find something: try values around 2 minutes (in seconds, minutes, milliseconds and hex).
Maybe it is a bug in the code implementation.
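A rough sketch of that search, done from Python instead of egrep; the patterns are guesses at how a ~2-minute interval might be written, not a definitive list (Invidious is written in Crystal, hence the *.cr files):

```python
# Scan the source tree for literals that could encode a ~2-minute
# interval (120 s, 120000 ms, 0x78) and for time-related calls.
import re
from pathlib import Path

PATTERNS = [r"\b120\b", r"\b120000\b", r"\b0x78\b", r"\bsleep\b", r"\bTime\."]
regex = re.compile("|".join(PATTERNS))

for path in Path("src").rglob("*.cr"):
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if regex.search(line):
            print(f"{path}:{lineno}: {line.strip()}")
```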
@doc75 when you test on localhost, do you mean without network access? If not, or if there is no restriction on Google IPs, somebody could still connect to your device if the code were malicious.
@HumanG33k, when I said I tested on the local network, I meant that the machine was connected to the internet, but there were certainly no incoming connections. You are right that Invidious was able to connect to the internet by itself. The point I wanted to make is that it seems to leak even when nobody is connected.
I also noticed that after a while it stops growing (so it might be linked to initialization), but I did not let it run for more than 7-8 hours.
I will try without internet access to see if the memory also increases.
Some more details can be found in this issue: https://github.com/iv-org/documentation/issues/241#issuecomment-1167236305
Please prioritize this. This is getting annoying.
I do not observe this. I have been running Invidious in Docker in an unprivileged LXC (on ZFS) for two years, only upgrading once in a while with Watchtower. I never restart it and it never crashes.
I did a manual check to verify whether there is any change in memory consumption on my hypervisor during a docker compose down && docker compose up -d, together with an update. This is the graph:
I issued the down & up commands at 6:41 AM and no impact can be seen.
Thank you for your input @Sieboldianus.
There are actually still some memory leaks/issues that can easily be reproduced when running Invidious under high load, like on a public instance.
It's great, though, that on a private, low-traffic instance there is no issue anymore.
I have a public instance with at most 10 users, hosted on a tiny machine with 1GB of RAM. The memory usage causes a crash every few hours.
It isn't instantly obvious, but it builds up over time: it went from 56% usage to 70% usage after 10 videos and doesn't drop back down.
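That per-video growth is easy to turn into numbers: request a handful of watch pages and record the container's memory after each one. A minimal sketch, where the instance URL, video ID, and container name are all placeholders:

```python
# Fetch N watch pages from an instance and print container memory
# after each request, to quantify "grows per video, never drops".
import subprocess
import urllib.request

INSTANCE = "http://localhost:3000"   # assumed local instance
VIDEO_IDS = ["dQw4w9WgXcQ"] * 10     # any ten watch pages will do

def container_mem(name: str = "invidious") -> str:
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format", "{{.MemUsage}}", name],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

print("baseline:", container_mem())
for i, vid in enumerate(VIDEO_IDS, 1):
    urllib.request.urlopen(f"{INSTANCE}/watch?v={vid}").read()
    print(f"after video {i}: {container_mem()}")
```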
I wanted to bring in my graphs and logs to help, since not everyone will have the capacity to let Invidious live out its memory leak. If desired, I can also provide further logs, but you would have to tell me which ones I should read out and how best to do so.
This has already been reported multiple times, but like #1345, this is an issue created to group together the other issues where this problem was reported.
This is also the issue where we will keep track of the progress in fixing it.
Invidious has a massive memory leak.
As shown in this screenshot...
...the memory usage of Invidious can increase to 4GB in 15 minutes, starting from less than 500MB. The dips in memory usage are restarts or crashes; another issue related to crashes is available here: #1439.
The speed at which the memory leak happens depends on the number of users of an instance. The instance in the screenshot is a public instance used by thousands of users.
Previous issue where this was reported: #1415 #721
Previous discussion about it: #1051