Closed grahamburgsma closed 4 years ago
@grahamburgsma is it growing over time? I've just enabled RAM tracking on one of my production droplets to watch the chart and check whether something goes wrong over time.
I think so; that first spike is from sending about 20 notifications. If it were only one at a time, more spread out, I think the growth would be more gradual. It doesn't seem linear, though: that second spike from 110 to 120 was the same number of notifications but only jumped 10 MB. The next time it tried sending notifications, the process was killed.
I've checked RAM for the last hour. With my Vapor app disabled, RAM usage is about 200 MB (Ubuntu 20.04, nginx, postgres). After launching the Vapor app it's about 220 MB, and after sending a bunch of notifications it grows to 255 MB and never goes higher. So my app's overall RAM usage is around 60 MB.
Regardless, sending notifications should not balloon the memory usage that much. Even if it spikes, it should be released and go back down to what it was before, no?
> @grahamburgsma How does your logic for sending pushes look?

Pretty straightforward, just `application.fcm.send(message)`. Sometimes it's in a loop whose futures get flattened, but that shouldn't be a problem. I've tried it from route handlers and using a job with Queues; both have the same issue.
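One thing worth trying when a flattened loop of sends spikes memory is batching: instead of creating every send future up front, split the device tokens into small chunks and dispatch a chunk at a time, which caps the number of in-flight requests and buffered payloads. Below is a minimal, hedged sketch of just the chunking part in plain Swift (the `chunked(into:)` helper is an assumption, not part of the FCM API; the FCM send calls themselves are omitted since they depend on a running Vapor app):

```swift
// Sketch: split work into fixed-size batches to limit how many
// notification sends are in flight at once. `chunked(into:)` is a
// hypothetical helper, not part of FCM or the standard library.
extension Array {
    /// Split the array into consecutive slices of at most `size` elements.
    func chunked(into size: Int) -> [[Element]] {
        stride(from: 0, to: count, by: size).map {
            Array(self[$0 ..< Swift.min($0 + size, count)])
        }
    }
}

// Hypothetical example: 20 device tokens sent in batches of 8.
let tokens = (1...20).map { "device-\($0)" }
let batches = tokens.chunked(into: 8)
print(batches.map { $0.count })  // [8, 8, 4]
```

In a real handler, each batch would be mapped to its send futures and the next batch started only after the previous one completes, so peak allocation stays roughly proportional to the batch size rather than to the total number of notifications.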
Looks good in profiler 🙂
If I look at the same thing in the debug navigator or Activity Monitor, it only grows, while at the same time the profiler shows that everything is OK with allocations. Maybe it's by design? As in, ARC works fine, but the RAM is still reserved by the app rather than returned to the OS.
There appears to be a large memory leak caused by sending notifications.
Here is a graph of the memory usage when running on AWS. When restarted it uses ~10 MB, but after sending the first few notifications it jumps to ~120 MB and never goes back down. I looked through the code and didn't see anything obvious; I would appreciate help tracking down this issue.
FCM: 2.7 Docker Image: swift:5.2-bionic-slim