Martinsdevs opened this issue 3 months ago (status: Open)
There's no memory leak; the data for all versions of WrappedBlockState is 80 MB. Why are you running a 1.21 server on 1 GB of RAM?
It is a memory leak. I have 8 GB of RAM.
> There's no memory leak; the data for all versions of WrappedBlockState is 80 MB. Why are you running a 1.21 server on 1 GB of RAM?
Assuming you made a typo and it's 80 bytes (not 80 megabytes) per instance, that's still about 6.5 gigabytes of RAM: 81,319,576 × 80 / 1,000,000 ≈ 6,505 megabytes.
However, I doubt this is a packetevents issue; it's more likely an issue with a plugin that uses packetevents.
Modern Paper builds have spark built in, so run
`/spark profiler start --alloc --thread *`
and upload the results here.
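For completeness, a rough sequence with a recent spark build (exact subcommands can vary slightly between spark versions) would be:

```
/spark profiler start --alloc --thread *
(let the server run while memory climbs)
/spark profiler stop
```

Stopping the profiler should print a spark.lucko.me link you can paste here.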
No, under that logic, why would com.mojang.datafixers.types.Type be taking almost 5 GB and com.mojang.datafixers.functions.Fold just over 4 GB?
> Assuming you made a typo and it's 80 bytes (not 80 megabytes) per instance, that's still about 6.5 gigabytes of RAM: 81,319,576 × 80 / 1,000,000 ≈ 6,505 megabytes.
I'm not sure how you did the math there, but 81,319,576 bytes of memory is, depending on whether you count in MiB or MB, anywhere between 77 and 81 MB of RAM. My point still stands.
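For reference, that range just reflects binary versus decimal units for the same byte count (assuming the profiler figure is raw bytes, as stated later in the thread):

$$\frac{81{,}319{,}576\ \text{B}}{1{,}048{,}576} \approx 77.6\ \text{MiB} \qquad\qquad \frac{81{,}319{,}576\ \text{B}}{1{,}000{,}000} \approx 81.3\ \text{MB}$$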
> It is a memory leak. I have 8 GB of RAM.
Your JVM is running on 1 GB; notice how the packetevents objects are using 8% of your memory while being 80 MB.
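A quick sanity check of that 8% figure (assuming the percentage from the heap report is relative to the total heap in the dump):

$$\frac{81{,}319{,}576\ \text{B}}{0.0836} \approx 973\ \text{MB}, \quad \text{i.e. roughly a 1 GB heap.}$$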
https://spark.lucko.me/rORVhYTRMh Here it is. It could be that I have something set up wrong, of course; I don't doubt that either.
I use LibsDisguises with the packetevents plugin.
https://spark.lucko.me/N6SkNKch8M This is without those 2 plugins
> I'm not sure how you did the math there, but 81,319,576 bytes of memory is, depending on whether you count in MiB or MB, anywhere between 77 and 81 MB of RAM. My point still stands.
Aren’t this is 81319576 instances? The only way to calculate this is to know how much 1 instance takes. I’m not very familiar with that profiler
> https://spark.lucko.me/rORVhYTRMh Here it is. It could be that I have something set up wrong, of course; I don't doubt that either.
> I use LibsDisguises with the packetevents plugin.
This profiler has “no data”
https://spark.lucko.me/CqJLMmwb81 Here is a new report, with the plugins :)
It looks like a third-party problem. I see nothing wrong here.
Aren’t this is 81319576 instances? The only way to calculate this is to know how much 1 instance takes. I’m not very familiar with that profiler
No, it is that many bytes, not instances; it says so after the % symbol. It's also a known number in packetevents that the wrapped block states take up about 81 MB of RAM.
Same issue here: a lot of WrappedBlockState, 700 MB .hprof. Just create some worlds or load some chunks; put packetevents alone on the server and you can reproduce it.
My servers were suddenly being killed by the OOM killer; memory just goes up indefinitely until it reaches the limit and the server dies. I tested it on a dedicated server, alone, only one server, only with packetevents, 2 GB memory. Start the server, create some worlds or wait for some players to join, wait 20-30 minutes, and the server dies.
Are you sure this is caused by packetevents? Yes, packetevents will consume memory, like every other plugin. An actual "memory leak" means something consuming potentially unbounded amounts of memory, accumulating over time. It's not a memory leak if packetevents uses e.g. 80 MiB to load some registries on startup. If you actually find a memory leak, please send a heap dump here or through Discord.
> My servers were suddenly being killed by the OOM killer
Aside from that, your OOM-killer issue is probably caused by your container memory limit being too low and not accounting for JVM/OS overhead.
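To illustrate the distinction in the comment above, here is a minimal, generic Java sketch (not packetevents code): a large registry loaded once is a bounded, one-time cost, while a collection that only ever grows is an actual leak.

```java
import java.util.ArrayList;
import java.util.List;

public class BoundedCostVsLeak {

    // Bounded cost: populated once at startup. It may be large
    // (tens of MB), but it never grows afterwards, so it is not a leak.
    private static final List<String> REGISTRY = new ArrayList<>();

    static void loadRegistryOnce() {
        for (int i = 0; i < 100_000; i++) {
            REGISTRY.add("block_state_" + i);
        }
    }

    // Actual leak: entries are added on every event and never removed,
    // so heap usage climbs without bound until the process dies.
    private static final List<byte[]> RETAINED_FOREVER = new ArrayList<>();

    static void onEveryPacket() {
        RETAINED_FOREVER.add(new byte[1024]);
    }
}
```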
My .hprof
.hprof header (I can send the entire .hprof if you need it, after removing internal code references):
- WrappedBlockState
- My Plugin that shades Packetevents
- Class loader
Tests (same result for all): tested with the latest Paper build, Paper Legacy v1.8, PandaSpigot, and ImanitySpigot.
Machine specs: Dedicated - Ryzen 7950X (32 threads) / 128 GB / 1 TB
1st test server specs: On Host - Linux ARM - JDK 17, G1GC, equal Xms and Xmx.
2nd test server specs: On Pterodactyl w/ Linux x64 - JDK 21, Parallel GC, equal Xms and Xmx limited to 85%.
3rd test server specs: On Host - Linux x64 - JDK 21, Generational ZGC (note: impossible to use ZGC, the server dies even earlier), equal Xms and Xmx
4th test server specs: On Pterodactyl - Linux x64 - JDK 21, with Grim Anticheat that shades Packetevents, equal Xms and Xmx
If you need more data, I can record a video of what happens over 30 minutes of the server running with and without packetevents.
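For anyone else trying to reproduce this, the setups above roughly correspond to launch flags like the following (illustrative heap sizes and jar name; adjust to your environment). Adding `-XX:+HeapDumpOnOutOfMemoryError` makes the JVM write the .hprof automatically when the crash happens:

```sh
# 1st test: G1GC, equal Xms/Xmx
java -Xms2G -Xmx2G -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./oom.hprof -jar paper.jar

# 2nd test: Parallel GC
java -Xms2G -Xmx2G -XX:+UseParallelGC -jar paper.jar

# 3rd test: Generational ZGC on JDK 21
java -Xms2G -Xmx2G -XX:+UseZGC -XX:+ZGenerational -jar paper.jar
```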
I have this exact same problem, hopefully it gets resolved.
Describe the bug
Memory leak: 100% of RAM gets used very fast.
Software brand
Paper 1.21.1. Great server host, no problems, runs fast.
Plugins
I ran tests with https://heaphero.io/. It told me clearly where the memory leak was. (Tested this twice.)
I also tested this without the plugin; it came out without any leaks.
28,046 instances of "java.lang.Class" occupy 241,775,368 (24.87%) bytes.
Biggest instances:
- class com.github.retrooper.packetevents.protocol.world.states.WrappedBlockState @ 0x6269887a0 - 81,319,576 (8.36%) bytes
- class com.mojang.datafixers.types.Type @ 0x618d59a90 - 62,428,800 (6.42%) bytes
- class com.mojang.datafixers.functions.Fold @ 0x61e9139d8 - 53,648,016 (5.52%) bytes
Expected behavior
No memory leak, but one occurred.