rafsko1 closed this issue 1 year ago.
How much ram in MB/GB is it actually using? We can't do much with just a percentage.
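For example, a one-shot docker stats call that prints absolute usage (a sketch; the container name matches the one reported later in this thread):

```sh
# Print one snapshot of memory usage in absolute units (MiB/GiB) rather
# than only a percentage; --no-stream exits after a single sample.
docker stats --no-stream \
  --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}" \
  immich_machine_learning
```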
Same problem here.
Updated to 1.66.1 two days ago.
Here are my current docker stats.
You can clearly see the difference since the last update.
@weber8thomas which version were you running before you updated?
Now it's consuming 17% (1.25 GB / 7.63 GB), but I've seen it consume almost 6 GB / 8 GB.
I was running 1.65.0
I don't think anything unusual is happening here - ML is expected to use quite a bit of RAM while processing is running, and after a bit of inactivity the models will be unloaded and the RAM usage will go down.
The point is that no processes are running (CPU between 0 and 1% in docker stats) and no jobs are listed on the admin dashboard of the web UI. That's why this is unusual compared to previous versions.
Please make a new issue for this. While it's expected that ML will use a high amount of RAM, there are unusual spikes here. Also be sure to mention the version you were using before updating and to post the ML logs.
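For example, a sketch of grabbing recent ML logs to attach to the new issue (assuming the default container name):

```sh
# Capture the last 200 lines from the machine learning container.
docker logs --tail 200 immich_machine_learning
```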
Exactly the same story here.
Now it's at 63% (4.81 GB / 7.63 GB), and it doesn't look like it will drop.
@bo0tzz
Sorry, but I can confirm that no unloading takes place within a reasonable timeframe, and the RAM hogging / memory leak grows further each time another set of jobs runs. (See the attached image as proof.)
Please investigate and fix this issue as soon as possible.
Meanwhile I'm thinking of creating a cronjob that restarts the ML container every hour as a temporary workaround. Can it cause any trouble?
Running a cronjob for it shouldn't cause an issue.
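For reference, a minimal crontab sketch of that workaround (run crontab -e to add it; the container name is an assumption based on the stats shown in this thread):

```sh
# Restart the ML container at the top of every hour.
# Note: cron's PATH is minimal, so you may need the full path to docker.
0 * * * * docker restart immich_machine_learning
```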
I think model unloading is causing a memory leak. The first time they're unloaded you can see a small decrease, but the next time RAM usage swells up further.
I don't think this is fixed. I am on 1.71.0 and I still see machine learning taking up around 1.6 GB of memory out of 8 GB.
Same here: 27%, 2.05 GB / 7.63 GB.
Is there a way to disable machine learning? Setting IMMICH_MACHINE_LEARNING_URL=false and removing the machine learning container didn't seem to help, as the Immich server kept crashing.
That memory usage is completely normal.
Could you share the logs for the server?
OK, based on the discussion above I thought the container was supposed to unload the models once it is idle. Is there documentation on how to switch off machine learning?
Thanks.
Model unloading is currently disabled by default since it can cause a memory leak.
As for disabling machine learning, the steps you mention are all that should be needed. If the server is crashing, I'd need to see the logs to help you.
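For reference, a sketch of those steps (assuming a standard docker compose setup, and that the machine-learning service block has already been deleted from docker-compose.yml):

```sh
# Point the server away from the ML service, per the steps discussed above.
echo 'IMMICH_MACHINE_LEARNING_URL=false' >> .env

# Recreate the stack; --remove-orphans cleans up the now-removed ML container.
docker compose up -d --remove-orphans
```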
If the models are not unloaded, can loading them mean memory usage keeps increasing? That's what I am seeing. I also see an increase at midnight, so I suppose that's when some jobs run; I didn't find where that is set. Example:

20:45 -> 1.130 GiB
23:59 -> 893.371 MiB
00:08 -> 2.590 GiB
01:47 -> 2.294 GiB
11:10 -> 2.470 GiB (keeps increasing)
19:45 -> 2.705 GiB
(reboot)
19:54 -> 1.093 GiB
20:02 -> 2.086 GiB
I am not sure if it's the same issue with model loading or something else.
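For anyone wanting to capture a trace like the above, a hypothetical sampling loop (the container name, 5-minute interval, and log file name are all assumptions):

```sh
# Append a timestamped memory sample for the ML container every 5 minutes.
while true; do
  printf '%s %s\n' "$(date '+%H:%M')" \
    "$(docker stats --no-stream --format '{{.MemUsage}}' immich_machine_learning)"
  sleep 300
done >> ml-mem.log
```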
Models are loaded on demand now, so the container will have lower RAM usage until a model is first used. Models won't be unloaded after that by default, though. RAM usage can also vary based on the images sent and the number of concurrent requests.
The bug
The Yacht dashboard is showing that immich_machine_learning is consuming between 20% and 60% of RAM.
The OS that Immich Server is running on
Debian
Version of Immich Server
v1.66.1
Version of Immich Mobile App
v1.66.0
Platform with the issue
Your docker-compose.yml content
Your .env content
Reproduction steps
Additional information
No response