Closed: waseric closed this issue 2 years ago.
> Over course of about 4 hours, grows to over 40G mem used
If your JVM is using that much memory, it is not and cannot be the heap. There have been cases of native leaks reported over the years, but nobody has been able to track down the source or even find a reliable way to reproduce them. Nothing between 388 and 389 could have contributed to a native memory leak.
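For anyone trying to narrow this down: a minimal way to separate heap growth from native growth is the JVM's built-in Native Memory Tracking. A sketch, assuming a Linux host and a server jar named `paper.jar` (`<pid>` and the jar name are placeholders, not from the original report):

```
# Start the server with Native Memory Tracking enabled (small overhead):
java -XX:NativeMemoryTracking=summary -Xms4G -Xmx4G -jar paper.jar nogui

# Record a baseline shortly after startup:
jcmd <pid> VM.native_memory baseline

# Once memory has grown, diff against the baseline to see which
# native category (Thread, Internal, Arena, etc.) is growing:
jcmd <pid> VM.native_memory summary.diff
```

Note that NMT only accounts for the JVM's own native allocations; a leak inside a JNI library may still show up only in the process RSS.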
I hesitated to bother reporting this because I don't have the expertise to adequately describe it. It's a new behavior in 391, impactful enough that we had to revert. Any guidance on how to better characterize it would be appreciated; otherwise we can close. There's something there, but if I can't gather useful data, there's not much point in keeping this open.
Thanks for your time.
Thank you very much for the detailed and well-thought-through report; however, no changes between 388 and 391 could have possibly caused this. Something else must have changed here to cause this.
@waseric did you ever find the cause of this? our server is having the same issue.
Sadly not. However, I do believe it is not related to this build, and most likely not to Paper either.
Sharing notes, in case it's at all helpful:
- It IS a native memory leak, as the Java heap consistently stays within the allocated size.
- We've had a total of 5 of these events in the past 30 days, the first of which pre-dated this build.
- User volume (high or low) is not a trigger. Specific users are not a trigger.
- We don't have good tools for analyzing native memory leaks, but I can confirm that leaked file handles are not a symptom.
- Only one of our servers has this issue, among many that all run the same software stack.

Our owner has noted that the Jobs Reborn plugin has been the cause of memory leaks in the past. I have no evidence to confirm that is the issue; it's one of the plugins unique to the affected server. That's my next area of focus. Some commands for reproducing these checks follow below.
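The observations above (heap within bounds, RSS growing, file handles ruled out) can be verified from the command line. A sketch for a Linux host; `<pid>` is the server's process id and is a placeholder:

```
# Resident set size of the whole process (heap + native), in KiB:
ps -o rss= -p <pid>

# Java heap usage: the S0U, S1U, EU and OU columns are the used
# survivor/eden/old-generation sizes. If their sum stays within -Xmx
# while RSS keeps climbing, the growth is outside the Java heap:
jstat -gc <pid>

# Count open file descriptors, to rule out a handle leak:
ls /proc/<pid>/fd | wc -l
```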
We only have one server, but we can also confirm the native leak. I haven't done any testing yet to single out which plugin is causing it, however.
We upgraded on Dec 10 and the issues started happening on Dec 14, so it may be another plugin...
Timings or Profile link
https://timings.aikar.co/?id=720d1dde377944f0b59b8a86fd5c51e7
Description of issue
After switching to build 389, a memory leak started to occur on one server. The server has 4G allocated. Over the course of about 4 hours, memory use grows to over 40G, and the server eventually fails. After a stop and restart, the behavior repeats. We switched back to 388 at that point (based on the nature of commit 3e73355 and a heap dump comparison highlighting leakage in the LWCX plugin), and the issue no longer occurs.
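For context, the heap dump comparison mentioned above can be reproduced with jcmd. A sketch; `<pid>` and the output paths are placeholders:

```
# Take a heap dump (the JVM pauses while the file is written):
jcmd <pid> GC.heap_dump /tmp/heap-early.hprof

# After memory has grown, take a second dump:
jcmd <pid> GC.heap_dump /tmp/heap-late.hprof
```

Comparing the two dumps in a tool such as Eclipse MAT shows which classes (and, via their classloaders, which plugins) account for the growth; if the heap totals stay flat while the process keeps growing, the leak is native rather than on the heap.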
Plugin and Datapack List
[00:55:11 INFO]: Plugins (36): AncientGates, AreaShop, BanManager, BossShopPro, BungeeTabListPlus, ChatControl, CheckNameHistory, CMILib, CoreProtect, DiscordSRV, dynmap, Essentials, EssentialsSpawn, EventLogger, FastAsyncWorldEdit (WorldEdit), floodgate, GriefPrevention, HubKick, Jobs, LeaderHeadsRevamped, LuckPerms, LWC, Multiverse-Core, NashornPlus, OreAnnouncer, PlaceholderAPI, Plan, PremiumVanish, ProtocolLib, ServerSigns, Skript, sthorses, Vault, ViaVersion, WorldBorder, WorldGuard
[00:55:36 INFO]: There are 5 data packs enabled: [vanilla (built-in)], [file/bukkit (world)], [file/extra-mob-heads-1.17 (world)], [file/dragon drops-1.17 (world)], [file/double shulker shells-1.17 (world)]
[00:55:36 INFO]: There are 3 data packs available: [file/extra-mob-heads (world)], [file/extra-mob-heads-1.15 (world)], [file/more-mob-heads-2.3.0 (world)]
Server config files