@bbpennel Thanks for reporting! I will look into it soon.
I am wondering if it could be related to a combination of the fact that we are creating new MatomoTrackers for every request and this block, where it looks like a new FixedThreadPool gets initialized for each tracker but doesn't appear to get shut down: https://github.com/matomo-org/matomo-java-tracker/blob/main/java11/src/main/java/org/matomo/java/tracking/Java11SenderProvider.java#L25
It seems like we should probably reuse a single instance of MatomoTracker (see the sketch below).
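For reference, a minimal sketch of the reuse pattern, assuming the v3 TrackerConfiguration builder and sendRequestAsync API; the class, field, endpoint, and site id here are illustrative, not our actual code:

```java
import java.net.URI;

import org.matomo.java.tracking.MatomoRequest;
import org.matomo.java.tracking.MatomoTracker;
import org.matomo.java.tracking.TrackerConfiguration;

public class AnalyticsService {

  // Build the tracker once and reuse it; its internal thread pool is then
  // shared across all requests instead of a new pool being created per request.
  private final MatomoTracker tracker = new MatomoTracker(
      TrackerConfiguration.builder()
          .apiEndpoint(URI.create("https://matomo.example.org/matomo.php"))
          .defaultSiteId(1)
          .build());

  public void trackPageView(String url, String title) {
    MatomoRequest request = MatomoRequest.request()
        .actionUrl(url)
        .actionName(title)
        .build();
    // Fire-and-forget; the tracker's shared executor handles the send.
    tracker.sendRequestAsync(request);
  }
}
```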
I was able to replicate the issue on our test server after about 16k requests. I switched over to initializing a single MatomoTracker and haven't seen the issue recur after about 64k requests. So our usage was the main issue, but it might still be a good idea to add a method for shutting down the executor for cases where clients use multiple MatomoTrackers.
Yeah, I think that's the reason for it. MatomoTracker should be reused.
Thanks so much! I will look into how I can improve that.
@bbpennel The MatomoTracker class now implements AutoCloseable to ensure that users close it after usage. Closing is blocking: it shuts down the threads used by async requests and frees the memory. Thanks for making Matomo Java Tracker better!
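For anyone landing here later, closing the tracker might look roughly like the following sketch, assuming the AutoCloseable support described above; the endpoint and site id are illustrative:

```java
import java.net.URI;

import org.matomo.java.tracking.MatomoRequest;
import org.matomo.java.tracking.MatomoTracker;
import org.matomo.java.tracking.TrackerConfiguration;

public class TrackerLifecycleExample {

  public static void main(String[] args) {
    // try-with-resources works for short-lived usage; close() blocks until
    // pending async sends finish and the internal executor is shut down.
    try (MatomoTracker tracker = new MatomoTracker(
        TrackerConfiguration.builder()
            .apiEndpoint(URI.create("https://matomo.example.org/matomo.php"))
            .defaultSiteId(1)
            .build())) {
      tracker.sendRequestAsync(MatomoRequest.request()
          .actionUrl("https://example.org/some/page")
          .actionName("Some Page")
          .build());
    }
    // For a long-lived tracker (the usual case), keep one instance for the
    // application's lifetime and call tracker.close() from a shutdown hook
    // or the framework's destroy callback instead.
  }
}
```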
Describe the bug
We recently updated to version 3 of the matomo-java-tracker library and have started having issues with resources being exhausted. We are sending a server-side event to Matomo when one of our API endpoints is accessed. Initially we were using the jre8 jar before we realized there was a separate jre11 version, but we are still experiencing the issue after switching. It seems to cause our server to run out of resources and require a restart periodically. The error looks like:
Code snippets
Our code that triggers the events is found here: https://github.com/UNC-Libraries/box-c/blob/v5.31.2/web-common/src/main/java/edu/unc/lib/boxc/web/common/utils/AnalyticsTrackerUtil.java#L67-L93
This is very similar to our usage prior to matomo-java-tracker v3: https://github.com/UNC-Libraries/box-c/pull/1655/files
Expected behavior
Sending events should not cause our application to stop working.
Additional context
It seems like the resource exhaustion started happening shortly after we made adjustments to how the visitorId was being set, since we were getting many of the following errors:
inputHex is marked non-null but is null
This was being triggered by bot traffic when "Proxy-Client-IP: 0.0.0.0" was set, which caused us to pass a null visitorId to Matomo. We changed it to passing in a randomly generated visitorId, which resolved the "inputHex is marked non-null but is null" error, but now resources are being exhausted instead, possibly because many more requests are being sent to Matomo. I'm trying to replicate the issue on our testing servers but haven't been able to do so yet, but we've had it happen in production twice in a week.
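The fallback is roughly the following sketch: generate a random 16-character hex string to use as the visitor id when the request doesn't give us a usable one. The helper name is illustrative, and I'm assuming the tracker accepts the id as a 16-hex-character string as in our previous usage:

```java
import java.security.SecureRandom;

public final class VisitorIds {

  private static final SecureRandom RANDOM = new SecureRandom();
  private static final char[] HEX = "0123456789abcdef".toCharArray();

  private VisitorIds() {
  }

  /**
   * Returns a random 16-character hex string suitable for use as a visitor id,
   * so we never pass a null id when bot traffic sends headers like
   * "Proxy-Client-IP: 0.0.0.0".
   */
  public static String randomVisitorId() {
    StringBuilder sb = new StringBuilder(16);
    for (int i = 0; i < 16; i++) {
      sb.append(HEX[RANDOM.nextInt(HEX.length)]);
    }
    return sb.toString();
  }
}
```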
We have not changed the default threadPoolSize or any of the timeouts, but it seems like increasing the number might make it exhaust all threads sooner?
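For completeness, this is roughly where those settings live; a sketch assuming the v3 TrackerConfiguration builder exposes the thread pool size and timeouts as properties (the exact property names and types may differ slightly):

```java
import java.net.URI;
import java.time.Duration;

import org.matomo.java.tracking.MatomoTracker;
import org.matomo.java.tracking.TrackerConfiguration;

public class TrackerConfigExample {

  public static MatomoTracker buildTracker() {
    // Values shown explicitly only to illustrate which knobs affect the async
    // sending behavior; in our deployment the defaults were left in place.
    TrackerConfiguration configuration = TrackerConfiguration.builder()
        .apiEndpoint(URI.create("https://matomo.example.org/matomo.php"))
        .defaultSiteId(1)
        .threadPoolSize(2)                      // size of the internal executor
        .connectTimeout(Duration.ofSeconds(5))  // connection establishment timeout
        .socketTimeout(Duration.ofSeconds(5))   // read timeout per request
        .build();
    return new MatomoTracker(configuration);
  }
}
```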