OKTW-Network / FabricProxy-Lite

Fabric mod that supports forwarding player data from Velocity.
https://modrinth.com/mod/fabricproxy-lite
MIT License
81 stars · 43 forks

Memory leak issue #60

Closed · tanetakumi closed this 1 year ago

tanetakumi commented 1 year ago

I would like to discuss a memory leak that occurs when connecting from Velocity to Fabric.

I have several Paper and Fabric servers connected to Velocity. Among them, only the Fabric server's memory usage climbs every time a player joins. The environment is as follows:

- Ubuntu 22.04
- openjdk version "17.0.7" 2023-04-18
- Device memory: 96 GB
- Minecraft version: 1.19.4 ~ 1.20.1

Startup Java options:

$JAVA -jar -Xms$MEMORY -Xmx$MEMORY \
-XX:+UnlockExperimentalVMOptions \
-XX:+UseG1GC \
-XX:+ParallelRefProcEnabled \
-XX:MaxGCPauseMillis=200 \
-XX:+DisableExplicitGC \
-XX:+AlwaysPreTouch \
-XX:G1NewSizePercent=30 \
-XX:G1MaxNewSizePercent=40 \
-XX:G1HeapRegionSize=8M \
-XX:G1ReservePercent=20 \
-XX:G1HeapWastePercent=5 \
-XX:G1MixedGCCountTarget=4 \
-XX:InitiatingHeapOccupancyPercent=15 \
-XX:G1MixedGCLiveThresholdPercent=90 \
-XX:G1RSetUpdatingPauseTimePercent=5 \
-XX:SurvivorRatio=32 \
-XX:+PerfDisableSharedMem \
-XX:MaxTenuringThreshold=1 \
-XX:+UnlockDiagnosticVMOptions \
-XX:+DebugNonSafepoints \
-XX:NativeMemoryTracking=summary \
`ls | grep .jar` nogui

To investigate this problem, I used NativeMemoryTracking to find out which category was consuming the most memory. This is the result:

Total: reserved=11081MB, committed=9806MB
       malloc: 1005MB #781058
       mmap:   reserved=10076MB, committed=8802MB

-                 Java Heap (reserved=8192MB, committed=8192MB)
                            (mmap: reserved=8192MB, committed=8192MB) 

-                     Class (reserved=1027MB, committed=20MB)
                            (classes #26045)
                            (  instance classes #24883, array classes #1162)
                            (malloc=3MB #91169) 
                            (mmap: reserved=1024MB, committed=16MB) 
                            (  Metadata:   )
                            (    reserved=128MB, committed=99MB)
                            (    used=98MB)
                            (    waste=1MB =0.63%)
                            (  Class space:)
                            (    reserved=1024MB, committed=16MB)
                            (    used=16MB)
                            (    waste=0MB =3.03%)

-                    Thread (reserved=138MB, committed=11MB)
                            (thread #139)
                            (stack: reserved=138MB, committed=11MB)

-                      Code (reserved=250MB, committed=143MB)
                            (malloc=8MB #34257) 
                            (mmap: reserved=242MB, committed=135MB) 

-                        GC (reserved=400MB, committed=400MB)
                            (malloc=63MB #66642) 
                            (mmap: reserved=337MB, committed=337MB) 

-                  Compiler (reserved=1MB, committed=1MB)
                            (malloc=1MB #3366) 

-                  Internal (reserved=1MB, committed=1MB)
                            (malloc=1MB #38440) 

-                     Other (reserved=894MB, committed=894MB)
                            (malloc=894MB #796) 

-                    Symbol (reserved=19MB, committed=19MB)
                            (malloc=18MB #540913) 
                            (arena=2MB #1)

-    Native Memory Tracking (reserved=12MB, committed=12MB)
                            (tracking overhead=12MB)

-        Shared class space (reserved=16MB, committed=12MB)
                            (mmap: reserved=16MB, committed=12MB) 

-                 Metaspace (reserved=129MB, committed=99MB)
                            (malloc=1MB #237) 
                            (mmap: reserved=128MB, committed=99MB) 

The results suggest that the Other category is the source of the leak. The Other category covers DirectByteBuffer allocations, so perhaps the buffers used to process some packets are not being released. However, I could not track down where.
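To watch the direct pool without restarting with NMT, the JDK's standard BufferPoolMXBean reports how much native memory DirectByteBuffers currently hold. This is a standalone monitoring sketch, not part of the server or the mod; the 8 MB allocation is a hypothetical stand-in just to make the numbers non-zero:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class DirectBufferMonitor {

    /** Bytes of native memory currently held by DirectByteBuffers. */
    static long directBufferBytes() {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed();
            }
        }
        return -1; // "direct" pool not found (should not happen on HotSpot)
    }

    public static void main(String[] args) {
        // Hypothetical stand-in for a packet buffer; the reference is kept so
        // the allocation is still visible in the pool statistics below.
        ByteBuffer packetBuffer = ByteBuffer.allocateDirect(8 * 1024 * 1024);

        System.out.println("direct pool bytes: " + directBufferBytes());
        // Touch the buffer so it cannot be collected before the read above.
        System.out.println("buffer capacity: " + packetBuffer.capacity());
    }
}
```

Sampling this value periodically (e.g. once a minute) and correlating it with player joins should show whether the growth really lives in DirectByteBuffers.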

As additional information, changing the value of network-compression-threshold also changes the rate of growth. The growth is quite large when network-compression-threshold=-1.
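One possible contributing factor (an assumption on my part, not something I have confirmed in the server): the native memory behind a DirectByteBuffer is only released when the ByteBuffer object itself is garbage-collected, and the startup flags above include -XX:+DisableExplicitGC, which turns the last-resort System.gc() that NIO issues under direct-memory pressure into a no-op. A small standalone sketch of that lifecycle:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class DirectReclaimDemo {

    /** Bytes of native memory currently held by DirectByteBuffers. */
    static long directBytes() {
        for (BufferPoolMXBean p :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(p.getName())) {
                return p.getMemoryUsed();
            }
        }
        return -1;
    }

    public static void main(String[] args) throws InterruptedException {
        // Allocate 64 MB of direct buffers and immediately drop the
        // references, imitating short-lived packet buffers.
        for (int i = 0; i < 64; i++) {
            ByteBuffer.allocateDirect(1024 * 1024);
        }
        long before = directBytes();

        // Under -XX:+DisableExplicitGC this call would do nothing, so the
        // native memory would stay reserved until an ordinary GC runs.
        System.gc();
        Thread.sleep(500); // give the Cleaner time to free the buffers

        System.out.println("before gc: " + before + " bytes, after gc: "
                + directBytes() + " bytes");
    }
}
```

If the server's unreferenced buffers are only ever freed by full GCs that rarely run on a lightly loaded heap, the "Other" category would keep climbing even without a true leak.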

Finally, this does not seem to be caused by the current FabricProxy-Lite mod itself, but I wonder if some additional processing is needed when connecting to Velocity?