I have taken a full-heap dump of the process and looked at it in WinDBG. The difference between RSS and the Dart heap is all in the native heap.
```
0:000> !heap -s
************************************************************************************************************************
                                              NT HEAP STATS BELOW
************************************************************************************************************************
LFH Key                   : 0xf51b606bcc994274
Termination on corruption : ENABLED
          Heap     Flags   Reserv  Commit  Virt   Free  List   UCR  Virt  Lock  Fast
                            (k)     (k)    (k)     (k) length      blocks cont. heap
-------------------------------------------------------------------------------------
000001b5d2060000 00000002  470020 462504 469628  16943   939    87   44   2e0   LFH
000001b5d1ef0000 00008000      64      4     64      2     1     1    0     0
000001b5d21a0000 00001002    1472     92   1080     16     8     2    0     0   LFH
000001b5d3a80000 00000002      60      8     60      3     1     1    0     0
-------------------------------------------------------------------------------------
```
I asked WinDBG to produce some statistics (using `!heap -s -h`), and there is a large number of 64 KB chunks of memory allocated:
```
0:000> !heap -s -h 000001b5d2060000
Walking the heap 000001b5d2060000 .................................
 0: Heap 000001b5d2060000
   Flags                00000002 - HEAP_GROWABLE
   Reserved memory in segments              469628 (k)
   Commited memory in segments              462412 (k)
   Virtual bytes (correction for large UCR) 465208 (k)
   Free space                               16943 (k) (939 blocks)
   External fragmentation        3% (939 free blocks)
   Virtual address fragmentation 0% (87 uncommited ranges)
   Virtual blocks  44 - total 118820 KBytes
   Lock contention 736
   Segments        1
...
                    Default heap    Front heap      Unused bytes
  Range (bytes)      Busy   Free    Busy   Free    Total  Average
------------------------------------------------------------------
...
  65536 -  66560     6062      1       0      0    97056       16
...
------------------------------------------------------------------
Total                7578    939   62309  25302  1063842       15
```
This points me towards the `OverlappedBuffer`s used by `dart:io` (because they are 64 KB in size). I can see at least one possible leak there just by looking at the code, but I don't yet know what exactly is happening.
We are leaking `data_ready_` buffers when destroying a `DirectoryWatchHandle`.
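For scale, the histogram above shows 6062 busy blocks in the 64 KB bucket of the default heap, which is roughly 0.37 GB by itself, so leaking one such buffer per destroyed watch handle adds up quickly. Below is a minimal, hypothetical Dart-side sketch (not part of this report) of how that theory could be exercised: repeatedly create and cancel directory watches on Windows and watch the process RSS climb if the buffers really are leaked on teardown.

```dart
// Hypothetical repro sketch (not from the issue): if the native
// DirectoryWatchHandle leaks its data_ready_ buffer when it is destroyed,
// repeatedly starting and cancelling watches should make RSS climb steadily.
import 'dart:io';

Future<void> main() async {
  final dir = Directory.systemTemp.createTempSync('watch_leak_');
  for (var i = 0; i < 10000; i++) {
    // Each watch creates a native watch handle with a 64 KB buffer.
    final sub = dir.watch().listen((_) {});
    // Give the event handler a moment to issue the overlapped read.
    await Future<void>.delayed(const Duration(milliseconds: 1));
    // Cancelling destroys the handle; this is where the leak is suspected.
    await sub.cancel();
    if (i % 1000 == 0) {
      print('iteration $i: rss = ${ProcessInfo.currentRss ~/ (1024 * 1024)} MB');
    }
  }
}
```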
Hah, after finding that it didn't seem to apply to Linux, I started (and am currently) running an instance where I've disabled creating new watches in the analyzer (it just reuses the old ones), and yes, it does seem to stop the leak (although it's only at iteration 70 at this point).
With that hack on the analyzer, my Windows run after 250 iterations ends up at

```
Process
[...]
current memory 1.45GB
peak memory 1.54GB
[..]
VM
[...]
current memory 1.34GB
```

i.e. ~0.11GB unaccounted for (which was also the starting point above), so it all seems to be related to the `.watch` thing.
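For reference, here is a minimal sketch of the kind of analyzer-side hack described above, in an assumed shape rather than the actual change: cache a single broadcast watch stream per directory and hand it back out instead of calling `Directory.watch()` again, so the native watch handles are never torn down and recreated.

```dart
import 'dart:io';

// Cache of watch streams keyed by directory path (illustrative only).
final Map<String, Stream<FileSystemEvent>> _watchCache = {};

// Returns a cached watch stream for [path], creating it only on first use,
// so the underlying native watch handle is never destroyed and recreated.
Stream<FileSystemEvent> watchOnce(String path) {
  return _watchCache.putIfAbsent(
    path,
    () => Directory(path).watch(recursive: true).asBroadcastStream(),
  );
}
```

Callers would then use `watchOnce(path).listen(...)` wherever they previously created a fresh watch.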
Any updates on this one?
I have a fix, though I got stuck in some refactoring. I might consider landing it without the refactoring if I can't get unstuck.
With https://dart-review.googlesource.com/c/sdk/+/309722 applied, I'm running this script on Windows (on Linux we'll run into https://github.com/dart-lang/sdk/issues/52703), located at `.\pkg\analysis_server\tool\lspPkg.dart`. It opens an analyzer session with the `pkg` package, then edits a `pubspec.yaml` file 250 times (it takes maybe 20 minutes to get there, but the leak can be seen before that), triggering the analyzer to do some work; a conceptual sketch of the loop is included at the end of this comment. Opening the Observatory for the analyzer process via http://127.0.0.1:8181/ we can observe how the process memory increases. For me I get these numbers:
Initially (i.e. after the script eventually says `isAnalyzing is now done after 0:02:58.668936` and `Should now be initialized.`), ~0.11GB is unaccounted for.
After 25 rounds, ~0.27GB is unaccounted for.
After 50 rounds, ~0.44GB is unaccounted for.
After 100 rounds, ~0.76GB is unaccounted for.
After 250 rounds, ~1.72GB is unaccounted for.
Once the analyzer is done rebuilding, it lands at ~1.73GB unaccounted for.
Forcing GCs via Observatory doesn't change anything.
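For readers without the script at hand, here is a conceptual sketch of what the repro loop does. This is an assumed shape, not the actual `.\pkg\analysis_server\tool\lspPkg.dart` (which drives an actual analyzer session): it just keeps modifying a `pubspec.yaml` inside the analyzed package so that the analyzer's file watchers fire and analysis is re-triggered.

```dart
import 'dart:io';

Future<void> main() async {
  // Hypothetical path; the real script edits a pubspec.yaml inside the
  // `pkg` package it opened the analyzer session on.
  final pubspec = File(r'pkg\some_package\pubspec.yaml');
  final original = pubspec.readAsStringSync();
  for (var round = 1; round <= 250; round++) {
    // Append a comment so the content actually changes, then restore it.
    pubspec.writeAsStringSync('$original\n# edit $round\n');
    await Future<void>.delayed(const Duration(seconds: 2));
    pubspec.writeAsStringSync(original);
    await Future<void>.delayed(const Duration(seconds: 2));
    print('round $round done');
  }
}
```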
/cc @mkustermann @mraleph