csabahenk opened this issue 2 years ago
I'll have a look at this for my memory-efficient fork. It looks reasonably straightforward to fix (especially given the helpful gron.rb example, thanks!)
With N=21, I've got it down to 12s and 459M. Without frequent `runtime.GC()` calls (which obviously slow things down but do help: N=24 is 135s, 3.2G with a GC every 1M input lines; 116s, 3.6G without), I can't see any obvious way to get the memory usage down any further.
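The "forced GC every N input lines" pattern mentioned here can be sketched like this (in Ruby for illustration only; the fork itself is Go, and this helper is my own, not code from the fork):

```ruby
# Sketch: stream lines, forcing a garbage collection every `every`
# lines to cap peak memory at some cost in throughput.
# Hypothetical helper, not part of gron or its forks.
def each_line_with_gc(lines, every = 1_000_000)
  count = 0
  lines.each do |line|
    yield line
    count += 1
    GC.start if (count % every).zero?   # forced collection
  end
  count
end
```

Lowering `every` trades extra GC pauses for a smaller peak footprint, which matches the 1M-vs-2M numbers reported below.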
That sounds cool. Do you plan to merge these adjustments into the code base? (If they're not yet suitable for prime time, then at least onto a topic branch?)
I'll try and get a branch up on my fork this week.
A couple of days' delay: I had an epiphany earlier which might bring the memory usage right down if I can rearchitect the internals appropriately. It looks promising from today's quick testing...
Up on my fork as the `f/reduce-memory` branch; use `gron -u -j -e` to activate the low-memory variant.
N=21 is now at ~9s, 415M with a forced GC every 2M lines, or 405M with a GC every 1M lines (compared to ~11s, 360M for the Ruby). N=24 comes in at ~94s, 3.3G with the 2M GC, or ~102s, 2.9G with the 1M GC (compared with ~110s, 1.7G for the Ruby).
Output verified as correct by feeding everything through `jq -S | openssl dgst -blake2b512` and comparing the hashes.
It falls off a CPU (but not memory) cliff at N=25. It's not the number of lines, because I can feed it 75M lines of a different file with no problem; I suspect the depth of the structure and the resultant cavalcade of pointers are causing the GC some internal discomfort. Investigations continue, but it's good to go for normal workloads now.
(All benchmarks run on a 2020 M1 Pro with arm64 binaries, Go 1.17)
Hi @csabahenk @rjp,
as an alternative option you can try my project, https://github.com/adamritter/fastgron (written in C++). It doesn't support `-j` (the JSON-based output format), but I converted the file produced by `./bintreefeed 21` and got these timings:
```
$ time gron -u g.gson > /dev/null
gron -u g.gson > /dev/null  51.35s user 46.38s system 157% cpu 1:01.91 total
$ time fastgron -u g.gson > /dev/null
fastgron -u g.gson > /dev/null  2.38s user 0.34s system 97% cpu 2.796 total
```
(For the forward gron case, i.e. parsing JSON input, fastgron gets even more of a speedup, as it uses a SIMD-optimized library for JSON parsing.)
I'm using the following Ruby script, named bintreefeed.rb, as a data generator:
It takes a single numeric argument N and produces the gronned description of a complete binary tree of height N (using the JSON format). Example:
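The script itself is not reproduced here. As an illustration only, a minimal generator along these lines might look like the following (the `l`/`r` key names and the leaf values are my assumptions, not necessarily what the real bintreefeed.rb uses):

```ruby
#!/usr/bin/env ruby
# Sketch of a bintreefeed-style generator: emit gron-format
# assignment lines for a complete binary tree of height N,
# suitable for piping into `gron -u`.
def gron_tree(path, depth, n, out = [])
  if depth == n
    out << "#{path} = #{depth};"   # arbitrary scalar leaf value
  else
    out << "#{path} = {};"
    gron_tree("#{path}.l", depth + 1, n, out)
    gron_tree("#{path}.r", depth + 1, n, out)
  end
  out
end

puts gron_tree("json", 0, Integer(ARGV.fetch(0, "3")))
```

A complete tree of height N yields 2^(N+1) - 1 such lines, which is why the workload grows so quickly with N in the benchmarks below.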
Increasing N, we can observe that gron becomes extremely slow and bloated. E.g., the N=20 results on my machine: `./bintreefeed 20 | gron -u -j -m > /dev/null` takes 30 seconds and peaks at a 5 GB footprint. For comparison, I have a Ruby reimplementation of gron at https://github.com/csabahenk/gron_rb. Using its gron.rb script, `./bintreefeed 20 | ./gron.rb -u > /dev/null` takes 7 seconds and the footprint is around 300 MB. Checked also with N=21: gron runs 58 seconds with a 14 GB footprint, while gron.rb runs 22 seconds with a 480 MB footprint.

Notes:
A difference between gron and gron.rb is that with `-u` the former produces indented JSON, while the latter produces compact JSON. To get indented JSON with gron.rb, one can use the following oneliner (executed in the gron_rb repo):

Using this, the runtime with N=20 increases to 10 seconds and the footprint to 380 MB, not a significant difference.
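The oneliner itself is not shown above. As a guess at its effect, a small Ruby filter that re-serializes gron.rb's compact output as indented JSON could look like this (the function name and the invocation are my own, not from gron_rb):

```ruby
require "json"

# Re-serialize compact JSON as indented (pretty-printed) JSON.
def prettify(compact_json)
  JSON.pretty_generate(JSON.parse(compact_json))
end

# Hypothetical usage, with the above saved as pretty.rb:
#   ./bintreefeed 20 | ./gron.rb -u | ruby -r./pretty -e 'puts prettify($stdin.read)'
```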