valgrind --tool=massif --pages-as-heap=yes vdiff --log-level=debug -d /database/dump.db --budget=100 --timeout=1 --strategy=smart /benchmarks/c/array-industry-pattern/array_shadowinit_true-unreach-call_true-termination.i
Looks like peak memory is ~400K.
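(In case that number needs double-checking: massif writes its profile to massif.out.<pid>, and running ms_print on that file shows where the memory in the peak snapshot is attributed.)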
Using +RTS -s indeed shows very high numbers for that specific use case. Example (Ctrl+C after a few minutes):
   6,851,218,724,960 bytes allocated in the heap
      34,397,558,488 bytes copied during GC
       4,078,232,712 bytes maximum residency (23 sample(s))
         271,907,704 bytes maximum slop
                9106 MB total memory in use (485 MB lost due to fragmentation)

                                     Tot time (elapsed)  Avg pause  Max pause
  Gen  0     6465140 colls,     0 par   47.760s   65.971s     0.0000s    3.7486s
  Gen  1          23 colls,     0 par    0.000s    0.002s     0.0001s    0.0017s

  TASKS: 5 (1 bound, 4 peak workers (4 total), using -N1)

  SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)

  INIT    time    0.000s  (  0.001s elapsed)
  MUT     time   59.007s  ( 83.377s elapsed)
  GC      time   47.760s  ( 65.973s elapsed)
  EXIT    time    0.003s  (  0.020s elapsed)
  Total   time  106.770s  (149.370s elapsed)

  Alloc rate    116,109,179,766 bytes per MUT second

  Productivity  55.3% of total user, 55.8% of total elapsed

gc_alloc_block_sync: 0
whitehole_spin: 0
gen[0].sync: 0
gen[1].sync: 0
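For reading those numbers: "bytes allocated in the heap" is cumulative allocation over the whole run, so the ~6.8 TB figure mostly reflects throughput. The figures that actually point at a space leak are the ~4 GB maximum residency (peak live data) and the 9106 MB the RTS is holding from the OS. A heap profile would tell us what is being retained: build vdiff with profiling enabled, rerun the same command with +RTS -hc -RTS appended, and render the resulting vdiff.hp with hp2ps (exact build steps depend on how the project is set up).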
I think I found the culprit:
On the server I can currently observe vdiff using 15 GiB (!) of memory. To reproduce:
Some observations:
--verifiers="always-Sat"
Current experiment: running with --timeout=1 for faster throughput, and so far I cannot see the memory leak. Maybe the leak is in Timed.hs? (maybe in the custom pipe handling there)...
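I don't know yet whether Timed.hs is really at fault, but a classic way custom pipe handling leaks in Haskell is lazy I/O on the child's output: the String is only materialized on demand, so any per-run record that still references it keeps all the unread chunks and buffers alive. Purely as an illustration of that pattern (these functions are made up, not the real Timed.hs code):

-- Purely illustrative sketch; none of this is the actual Timed.hs code.
module PipeSketch where

import System.IO (Handle, hGetContents)
import qualified Data.ByteString.Char8 as BS

-- Leak-prone pattern: lazy String I/O on the verifier's output pipe. The
-- String is produced on demand, so anything that keeps a reference to it
-- (a per-run result record, a log entry, ...) also keeps every unevaluated
-- chunk and its buffers alive until the String is finally forced.
readOutputLazy :: Handle -> IO String
readOutputLazy h = hGetContents h

-- Stricter alternative: slurp the pipe into a strict ByteString, so the
-- result is one fully evaluated value with nothing lazy hanging off it.
readOutputStrict :: Handle -> IO BS.ByteString
readOutputStrict h = BS.hGetContents h

If per-run results (or timing/exit-code records wrapping them) are accumulated over thousands of verifier runs, something like this would explain steadily growing residency; a -hc heap profile should confirm or rule it out quickly.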