Closed: gianm closed this issue 7 years ago
Documentation for v0.9.2-rc1 is not available right now; any chance you could upload it?
@MdeArcayne, we haven't actually finished releasing 0.9.2-rc1 due to some technical difficulties with the web site, see: https://groups.google.com/forum/#!topic/druid-development/aYBS7wQHho8.
Hope to have this done soon and announced on the mailing list.
@gianm Thanks for the quick answer, keep up the good work!
0.9.2-rc1 announced: https://groups.google.com/d/topic/druid-user/7LY8PUqGuAA/discussion
Performance results look good on our query systems for rc1. Pretty significant topN query performance improvement.
@drcrallen It would be nice if you could also show the groupBy comparison. Thanks.
@giaosudau We don't use group-by in a production environment.
@giaosudau in our internal benchmarks groupBys are 2-5x faster
Note: Upgrade ordering is very important here to ensure cardinality aggregators work appropriately after https://github.com/druid-io/druid/pull/3406
Final notes up at https://github.com/druid-io/druid/releases/tag/druid-0.9.2
I wonder what the topN performance improvement in the screenshot above is due to. Is it the new long encodings or the loop unrolling that's doing that?
We ran some early tests that aren't conclusive yet, but while the improvement for HyperLogLogs was immediately noticeable, the segment scan times for topN queries seemed to be mostly the same as before. Is there something that needs to be taken care of specifically, like special inlining directives for the JVM or using the new segment encoding options? Or are there particular topN query scenarios that bring out the perf difference more than others? Thanks.
DRAFT
Druid 0.9.2 contains hundreds of performance improvements, stability improvements, and bug fixes from over 30 contributors. Major new features include a new groupBy engine, ability to disable rollup at ingestion time, ability to filter on longs, new encoding options for long-typed columns, performance improvements for HyperUnique and DataSketches, a query cache implementation based on Caffeine, a new lookup extension exposing fine grained caching strategies, support for reading ORC files, and new aggregators for variance and standard deviation.
The full list of changes is here: https://github.com/druid-io/druid/pulls?utf8=%E2%9C%93&q=is%3Apr%20is%3Aclosed%20milestone%3A0.9.2
Documentation for this release is here: http://druid.io/docs/0.9.2/
Highlights
New groupBy engine
Druid now includes a new groupBy engine, rewritten from the ground up for better performance and memory management. Benchmarks show a 2–5x performance boost on our test datasets. The new engine also supports strict limits on memory usage and the option to spill to disk when memory is exhausted, avoiding result row count limitations and potential OOMEs generated by the previous engine.
The new engine is off by default, but you can enable it through configuration or query context parameters. We intend to enable it by default in a future version of Druid.
See "implementation details" on http://druid.io/docs/0.9.2/querying/groupbyquery.html#implementation-details for documentation and configuration.
Added in #2998 by @gianm.
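As a sketch of what opting in per-query looks like: the new engine is selected through the query context (the `groupByStrategy` key, per the 0.9.2 docs). The datasource, dimension, and aggregator names below are illustrative placeholders, shown here as a Python dict mirroring the native JSON query body:

```python
import json

# Minimal groupBy query body; "events", "country", and "edits" are
# illustrative placeholders, not names from the release notes.
query = {
    "queryType": "groupBy",
    "dataSource": "events",
    "granularity": "all",
    "dimensions": ["country"],
    "aggregations": [
        {"type": "longSum", "name": "edits", "fieldName": "edits"}
    ],
    "intervals": ["2016-01-01/2016-02-01"],
    # Opt into the new engine for this query only; a cluster-wide
    # default can instead be set in configuration.
    "context": {"groupByStrategy": "v2"},
}

print(json.dumps(query, indent=2))
```

The same choice can be made cluster-wide in configuration rather than per query, which is the route you'd take once you've validated the new engine on your workload.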
Ability to disable rollup
Since its inception, Druid has had a concept of "dimensions" and "metrics" that applied both at ingestion time and at query time. Druid is one of the only databases that supports aggregation at data loading time, which we call "rollup". But for some use cases, ingestion-time rollup is not desired, and it's better to load the original data as-is. With rollup disabled, one row in Druid will be created for each input row.
Query-time aggregation is, of course, still supported through the groupBy, topN, and timeseries queries.
See the "rollup" flag on http://druid.io/docs/0.9.2/ingestion/index.html for documentation. By default, rollup remains enabled.
Added in #3020 by @kaijianding.
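As a sketch, the flag lives in the granularitySpec of the ingestion spec (field names follow the 0.9.2 ingestion docs; the interval is an illustrative placeholder):

```python
# granularitySpec with rollup disabled: Druid will keep one stored row
# per input row instead of pre-aggregating at load time.
granularity_spec = {
    "type": "uniform",
    "segmentGranularity": "DAY",
    "queryGranularity": "none",
    "rollup": False,  # default is True (rollup enabled)
    "intervals": ["2016-01-01/2016-02-01"],  # placeholder interval
}
```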
Ability to filter on longs
Druid now supports sophisticated filtering on integer-typed columns, including long metrics and the special __time column. This opens up a number of new capabilities.
Druid does not yet support grouping on longs. We intend to add this capability in a future release.
Added in #3180 by @jon-wei.
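For illustration, a bound filter can now compare values numerically rather than lexicographically. The column name below is a hypothetical long-typed metric, and the `"ordering": "numeric"` parameter is assumed from the 0.9.2 filter docs:

```python
# Bound filter selecting rows whose long-typed "duration" value falls
# in [100, 500]; "ordering": "numeric" requests numeric comparison
# instead of the lexicographic default.
numeric_filter = {
    "type": "bound",
    "dimension": "duration",  # hypothetical long-typed column
    "lower": "100",
    "upper": "500",
    "ordering": "numeric",
}
```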
New long encodings
Until now, all integer-typed columns in Druid, including long metrics and the special __time column, were stored as 64-bit longs optionally compressed in blocks with LZ4. Druid 0.9.2 adds new encoding options which, in many cases, can reduce file sizes and improve performance.
The default remains "longs" encoding + "lz4" compression. In our testing, two options that often yield useful benefits are "auto" + "lz4" (generally smaller than longs + lz4) and "auto" + "none" (generally faster than longs + lz4, file size impact varies). See the PR for full test results.
See "metricCompression" and "longEncoding" on http://druid.io/docs/0.9.2/ingestion/batch-ingestion.html for documentation.
Added in #3148 by @acslk.
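As a sketch, these options are set in the indexSpec of the ingestion tuningConfig. The field names follow the 0.9.2 docs; the "auto" + "none" combination shown is one of the options called out above as often faster than the default:

```python
# indexSpec choosing the new "auto" long encoding with no block
# compression for metrics ("auto" + "none"), one of the combinations
# the release notes found to often outperform longs + lz4.
index_spec = {
    "bitmap": {"type": "concise"},
    "dimensionCompression": "lz4",
    "metricCompression": "none",   # default is "lz4"
    "longEncoding": "auto",        # default is "longs"
}
```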
Sketch performance improvements
New extensions
And much more!
The full list of changes is here: https://github.com/druid-io/druid/pulls?utf8=%E2%9C%93&q=is%3Apr%20is%3Aclosed%20milestone%3A0.9.2
Updating from 0.9.1.1
Rolling updates
The standard Druid update process described at http://druid.io/docs/0.9.2/operations/rolling-updates.html should be followed for rolling updates.
Query time lookups
The druid-namespace-lookup extension, which was deprecated in 0.9.1 in favor of druid-lookups-cached-global, has been removed in 0.9.2. If you are using druid-namespace-lookup, migrate to druid-lookups-cached-global before upgrading to 0.9.2. See our migration guide for details: http://druid.io/docs/0.9.1.1/development/extensions-core/namespaced-lookup.html#transitioning-to-lookups-cached-global
Other notes
Please note the following changes:

If Hadoop indexing tasks fail due to dependency conflicts, try setting the mapreduce.job.classloader or mapreduce.job.user.classpath.first options. In testing we have found this to be an effective workaround. See http://druid.io/docs/0.9.2/operations/other-hadoop.html for details.

If needed, set chatThreads in the supervisor tuning configuration to a value greater than the number of running tasks as a workaround.

Credits
Thanks to everyone who contributed to this release!
@acslk @AlexanderSaydakov @ashishawasthi @b-slim @chtefi @dclim @drcrallen @du00cs @ecesena @erikdubbelboer @fjy @Fokko @gianm @giaosudau @guobingkun @gvsmirnov @hamlet-lee @himanshug @HyukjinKwon @jaehc @jianran @jon-wei @kaijianding @leventov @linbojin @michaelschiff @navis @nishantmonu51 @pjain1 @rajk-tetration @SainathB @sirpkt @vogievetsky @xvrl @yuppie-flu