bmatsuo closed this issue 10 years ago.
I guess it's fine for those that might be interested; this is more credit to snappy-go and the algorithm than this implementation of the streaming format, though.
> this is more credit to snappy-go and the algorithm than this implementation of the streaming format, though.
I agree completely. The thing that makes me think it deserves to be here is that the snappy-go:gzip comparison cannot be made directly using identical input data (block compression vs. streaming). That, and I think gophers hearing about snappy for the first time would be inherently curious.
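For context, the interface mismatch looks roughly like this: gzip compresses through a streaming `io.Writer`, while snappy-go compresses a whole buffer per call, so the two can't be driven the same way. This is only a sketch; the `Encode` signature shown is the current `github.com/golang/snappy` form (the old snappy-go package also returned an error).

```go
package main

import (
	"bytes"
	"compress/gzip"

	"github.com/golang/snappy"
)

func main() {
	src := bytes.Repeat([]byte("some benchmark input "), 1024)

	// Streaming: data flows through an io.Writer and is framed as it goes.
	var gzBuf bytes.Buffer
	gw := gzip.NewWriter(&gzBuf)
	gw.Write(src)
	gw.Close()

	// Block: the entire input is handed over in a single call.
	compressed := snappy.Encode(nil, src)
	_ = compressed
}
```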
yea, it's fine...
you wanna bring these in for posterity?
I waffled on this a little. It may be better as a blog post or a gist or something, possibly with more compression algorithms involved. There could just be a link to those results from README.md.
The benchmarks take significant time. I tried to add a flag (i.e. `go test -bench=. -compare-gzip`) so the benchmarks skip when not desired, which is most of the time. But `b.Skip()` just makes the output look terrible (I hoped it would only look gross when `-v` was supplied). I'm not sure how it would play with `benchcmp` either.
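Roughly what I had in mind (just a sketch, not the code in this branch; the package name and benchmark body are made up, only the `-compare-gzip` flag name comes from above):

```go
package snappystream_test // package name assumed

import (
	"bytes"
	"compress/gzip"
	"flag"
	"io"
	"testing"
)

// Custom flag registered by the test binary; the gzip comparison runs only
// when `go test -bench=. -compare-gzip` is used.
var compareGzip = flag.Bool("compare-gzip", false, "run gzip comparison benchmarks")

// BenchmarkGzipWriter is illustrative, not the repo's actual benchmark.
func BenchmarkGzipWriter(b *testing.B) {
	if !*compareGzip {
		b.Skip("skipping gzip comparison; pass -compare-gzip to enable")
	}
	data := bytes.Repeat([]byte("benchmark input "), 1<<12)
	b.SetBytes(int64(len(data)))
	for i := 0; i < b.N; i++ {
		w := gzip.NewWriter(io.Discard)
		w.Write(data)
		w.Close()
	}
}
```

The `b.Skip()` call is the part that prints a line per skipped benchmark, which is what makes the output look gross.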
What do you think?
yea, let's just move on... closing
@mreiferson do you have interest in adding benchmarks using gzip for direct comparison (using the same data)?
I was just too curious, so I wrote some (branched off #6). Gzip is significantly slower at everything except decoding highly compressible and incompressible data, where it is approximately the same speed. Maybe that was to be expected; I didn't expect it, given my naivety about the algorithms' differences. Let me know if you want them in the repo and I'll open a PR after buffering is merged.