Support for 64-bit floating point columns was released in Druid 0.11.0, so if this is enabled, versions older than 0.11.0 will not be able to read the data segments.
This was a little confusing when I first read it.
Please consider rephrasing to:
64-bit doubles aggregators are now used by default (see #5478). Support for 64-bit floating point columns was only released in Druid 0.11.0.
If you are upgrading from a version older than 0.11 directly to 0.13, your nodes will not be able to read the data segments. Thus, for users upgrading from a version older than 0.11, either:
1. Disable and keep the old format. For this, you will need to set the system-wide property `druid.indexing.doubleStorage=float`.
2. First upgrade to 0.11, and then to 0.13.
@pdeva thank you for the comment! Maybe @nishantmonu51 can confirm, but I don't think that's exactly accurate. 0.13 will not have trouble reading segments generated before 0.11; the issue is that since the 64-bit doubles aggregator was added in 0.11, if you enabled this feature and then rolled back to a version of Druid prior to 0.11.0, that's when you would run into compatibility issues. Is the comment more clear now with that context?
I am actually more confused now.
Is this correct, in a gist:
@pdeva For this specific feature, there is no issue upgrading from previous versions to 0.13; but if you don't explicitly configure 0.13 to use 32-bit floats to store double values (by setting `druid.indexing.doubleStorage=float`), you will not be able to roll back to a version before 0.11.
@dclim I see on the dev@ list that you've proposed a release candidate! Exciting!
Has the RC been published to any maven repository so we could test it directly, or are there just the binary tarballs?
@glasser I'm working on getting the RC published to Apache Nexus and will let you know when this is done. There will definitely be an RC2 coming as well (with some bugfixes + some changes to conform to Apache's licensing policy)
@glasser Just to close the loop here - RC2 has now been published along with the proposed Maven artifacts. The artifacts are in a staging directory on Apache Nexus and will be promoted for release once that vote is approved. You can view them here: https://repository.apache.org/content/repositories/orgapachedruid-1000/
@glasser FYI - we discovered a critical bug hours after RC2 was published, so there is now an RC3. Maven artifacts for RC3 are here: https://repository.apache.org/content/repositories/orgapachedruid-1001/
@dclim I noticed in testing that with the switch to the AWS S3 SDK instead of jets3t, a non-EC2 cluster that is using S3 as deep storage fails to start up properly. The issue is that you need to have an AWS region set (unlike jets3t, it won't default to us-east-1).
One way to work around this is:
- `-Daws.region=us-east-1` to all jvm.configs.
- `-Daws.region=us-east-1` to `druid.indexer.runner.javaOpts` in middleManager/runtime.properties.

Probably setting the AWS_REGION=us-east-1 environment variable would help too, but I haven't tested that. I believe this is only an issue if you are not running on EC2. On EC2 it seems to pick up the current region as the default region.
@gianm thank you, I can add this to the release notes. I suppose this should also be added to the S3 extension documentation, but I'm not clear on whether that would necessitate another release candidate (since the documentation is packaged in the release artifacts).
The link to the MySQL doc does not describe how to get the MySQL JDBC driver or where to place it. It still refers to the old method of using pull-deps to pull the entire extension.
Doesn't the AWS region property also need to be applied to historical configs, since they download from S3?
@pdeva when 0.13.0 is released, the docs will be updated and /latest should point to the new docs that describe where to put the mysql driver. You can see the docs here: https://github.com/apache/incubator-druid/blob/master/docs/content/development/extensions-core/mysql.md
Yes, the AWS region property also needs to be applied to the historical configs. The release notes mention adding it to the jvm.config for all Druid services.
@pdeva when 0.13.0 is released, the docs will be updated and /latest should point to the new docs that describe where to put the mysql driver.
This sounded reasonable, but it has not happened. I have a tool watching for Druid releases and alerting me to them, so now that it has been released, I wanted to read the release notes. However, many links still point to the now non-existent /docs/0.13.0-incubating/ pages, so I can't read the new documentation. I tried replacing the tag with latest, but then I still see the 0.12.3 documentation.

Could you please be careful to really point the latest documentation correctly when doing releases? I've had issues with this several times in the past already. It can't possibly be in your best interest to publish exciting new releases only to have many dead links in the release notes. Thank you :-)
I don't think that 0.13.0 is officially released. If you look here: http://druid.io/downloads.html, the latest stable release is 0.12.3.
Interesting, I had not considered that possibility. All I'm looking at are the releases on GitHub.
Me too =) ...but I have to use the compiled package and so I found (sadly) that there is none
@pantaoran apologies for the confusion, @aleksi75 is correct that 0.13.0 isn't officially released quite yet, we're just working on some final steps to prepare. I probably was a bit quick on creating the release tag (I was trying to get the 0.13.0 links and documentation tested in a staging environment and created the tag so the corresponding links would work). The Druid website will be updated and announcements will be sent out when it's official (which should be very soon).
Hello, for this precise calculation, what are your good methods and plans?
@pantaoran @aleksi75 @love1693294577 just wanted to let you know that 0.13.0-incubating is now officially released! You can get it from: http://druid.io/downloads.html
What does 'incubating' stand for in the release name in this case? Is this release not ready for production?
@pdeva 'incubating' denotes that Druid is currently under the Apache Incubator program (as opposed to having graduated to a full-fledged top-level Apache project). All projects enter the Apache foundation through the incubator. It has nothing to do with quality or suitability for production. All GA Druid releases are suitable for production usage.
@pantaoran @aleksi75 @love1693294577 just wanted to let you know that 0.13.0-incubating is now officially released! You can get it from: http://druid.io/downloads.html

Yippie a yeah.
It seems that the released file is a little strange.
I downloaded the file from http://mirrors.hust.edu.cn/apache/incubator/druid/0.13.0-incubating/apache-druid-0.13.0-incubating-bin.tar.gz (SHA1: e923fa8d2f4c79b28218a401f11aa2d379f46c59).
But the file seems abnormal: some files have incomplete file names. Did something go wrong during the distribution?
@dragonls I downloaded the file from 2 different mirrors, the SHA1 sum is the same as yours. When I extract it, everything seems fine. Did something go wrong during your decompression?
Emmm, interesting... Maybe there is something wrong during my decompression. Anyway, @pantaoran thanks a lot.
I tried `tar -zxvf xxx.tar.gz`, and all the results are OK. Maybe that is a bug in 7zip.
@dclim We're using s3 as deep storage and we had an incident when upgrading druid.
Our s3 access policy is:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "S3:*",
"Resource": [
"arn:aws:s3:::retracted-druid-dev/acceptance/*",
"arn:aws:s3:::retracted-druid-dev/acceptance-indexer-logs/*"
],
"Condition": {}
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::retracted-druid-dev"
],
"Condition": {}
}
]
}
Peons however started failing with:
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: DA6A50FF8662CAC3; S3 Extended Request ID: UvfZA/YEDbKKlgCUS6u4Tk0e9DTyZkLJqY2iPgYyYOCy08gnLPTkbN1HPy7kyaDd0tkwABIve7A=)
...
com.amazonaws.services.s3.AmazonS3Client.getBucketAcl(AmazonS3Client.java:1150) ~[aws-java-sdk-s3-1.11.199.jar:?]
at org.apache.druid.storage.s3.ServerSideEncryptingAmazonS3.getBucketAcl(ServerSideEncryptingAmazonS3.java:70) ~[?:?]
at org.apache.druid.storage.s3.S3Utils.grantFullControlToBucketOwner(S3Utils.java:199) ~[?:?]
It didn't even retry. The peons were shut down and the data was lost.
We fixed it by adding `s3:GetBucketAcl` to the policy. This should probably also be mentioned in the release notes.
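For reference, the extra statement we added looks roughly like this (a sketch against the same bucket as above; adjust the bucket ARN for your setup):

```json
{
  "Effect": "Allow",
  "Action": [
    "s3:GetBucketAcl"
  ],
  "Resource": [
    "arn:aws:s3:::retracted-druid-dev"
  ],
  "Condition": {}
}
```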
Hi @indrekj, it's documented in the master branch, but our docs are not updated yet. Sorry for the inconvenience.
Hi, can anybody help me download the Druid 0.10.1 version?
Thanks, Prabakaran Krishnan
http://static.druid.io/artifacts/releases/druid-0.10.1-bin.tar.gz should work.
Druid 0.13.0-incubating contains over 400 new features, performance/stability/documentation improvements, and bug fixes from 81 contributors. It is the first release of Druid in the Apache Incubator program. Major new features and improvements are described in the Highlights section below.
The full list of changes is here: https://github.com/apache/incubator-druid/pulls?q=is%3Apr+is%3Aclosed+milestone%3A0.13.0
Documentation for this release is at: http://druid.io/docs/0.13.0-incubating/
Highlights
Native parallel batch indexing
Introduces the `index_parallel` supervisor, which manages the parallel batch ingestion of splittable sources without requiring a dependency on Hadoop. See http://druid.io/docs/latest/ingestion/native_tasks.html for more information.

Note: This is the initial single-phase implementation and has limitations on how it expects the input data to be partitioned. Notably, it does not have a shuffle implementation, which will be added in the next iteration of this feature. For more details, see the proposal at #5543.
Added by @jihoonson in #5492.
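As a rough illustration, a parallel task spec submitted to the overlord might look like the sketch below; the datasource, schema, input paths, and tuning values are placeholders, so check the linked native_tasks documentation for the authoritative fields.

```json
{
  "type": "index_parallel",
  "spec": {
    "dataSchema": {
      "dataSource": "wikipedia",
      "parser": {
        "type": "string",
        "parseSpec": {
          "format": "json",
          "timestampSpec": { "column": "timestamp", "format": "iso" },
          "dimensionsSpec": { "dimensions": ["channel", "page"] }
        }
      },
      "metricsSpec": [{ "type": "count", "name": "count" }],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "queryGranularity": "NONE",
        "intervals": ["2018-01-01/2018-01-02"]
      }
    },
    "ioConfig": {
      "type": "index_parallel",
      "firehose": {
        "type": "local",
        "baseDir": "/tmp/wikipedia",
        "filter": "*.json"
      }
    },
    "tuningConfig": {
      "type": "index_parallel",
      "maxNumSubTasks": 4
    }
  }
}
```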
Automatic segment compaction
Previously, compacting small segments into optimally-sized ones to improve query performance required submitting and running compaction or re-indexing tasks. This was often a manual process or required an external scheduler to handle the periodic submission of tasks. This patch implements automatic segment compaction managed by the coordinator service.
Note: This is the initial implementation and has limitations on interoperability with realtime ingestion tasks. Indexing tasks currently require acquisition of a lock on the portion of the timeline they will be modifying to prevent inconsistencies from concurrent operations. This implementation uses low-priority locks to ensure that it never interrupts realtime ingestion, but this also means that compaction may fail to make any progress if the realtime tasks are continually acquiring locks on the time interval being compacted. This will be improved in the next iteration of this feature with finer-grained locking. For more details, see the proposal at #4479.
Documentation for this feature: http://druid.io/docs/0.13.0-incubating/design/coordinator.html#compacting-segments
Added by @jihoonson in #5102.
System schema tables
Adds a system schema to the SQL interface which contains tables exposing information on served and published segments, nodes of the cluster, and information on running and completed indexing tasks.
Note: This implementation contains some known overhead inefficiencies that will be addressed in a future patch.
Documentation for this feature: http://druid.io/docs/0.13.0-incubating/querying/sql.html#system-schema
Added by @surekhasaharan in #6094.
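As a quick illustration, the new tables can be queried through the existing SQL HTTP API by POSTing a request like the sketch below to the broker's `/druid/v2/sql` endpoint; the columns selected here are just examples, see the linked documentation for the full schema.

```json
{
  "query": "SELECT segment_id, datasource, num_rows FROM sys.segments LIMIT 5"
}
```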
Improved indexing task status, statistics, and error reporting
Improves the performance and detail of the ingestion-related APIs, which were previously quite opaque, making it difficult to determine the cause of parse exceptions, task failures, and the actual output from a completed task. Also adds improved ingestion metric reporting, including moving average throughput statistics.
Added by @surekhasaharan and @jon-wei in #5801, #5418, and #5748.
SQL-compatible null handling
Improves Druid's handling of null values by treating them as missing values instead of being equivalent to empty strings or a zero-value. This makes Druid more SQL compatible and improves integration with external BI tools supporting ODBC/JDBC. See #4349 for proposal.
To enable this feature, you will need to set the system-wide property `druid.generic.useDefaultValueForNull=false`.

Added by @nishantmonu51 in #5278 and #5958.
Results-level broker caching
Implements result-level caching on brokers which can operate concurrently with the traditional segment-level cache. See #4843 for proposal.
Documentation for this feature: http://druid.io/docs/0.13.0-incubating/configuration/index.html#broker-caching
Added by @a2l007 in #5028.
Ingestion from RDBMS
Introduces a `sql` firehose which supports data ingestion directly from an RDBMS.

Added by @a2l007 in #5441.
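A rough sketch of what the firehose portion of an ingestion spec might look like (the connection details and query are placeholders; consult the firehose documentation for the exact field names):

```json
{
  "type": "sql",
  "database": {
    "type": "mysql",
    "connectorConfig": {
      "connectURI": "jdbc:mysql://localhost:3306/mydb",
      "user": "druid",
      "password": "diurd"
    }
  },
  "sqls": [
    "SELECT timestamp, page, added FROM pageviews WHERE timestamp BETWEEN '2018-01-01' AND '2018-01-02'"
  ]
}
```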
Bloom filter support
Adds support for optimizing Druid queries by applying a Bloom filter generated by an external system such as Apache Hive. In the future, #6397 will support generation of Bloom filters as the result of Druid queries which can then be used to optimize future queries.
Added by @nishantmonu51 in #6222.
Additional SQL result formats
Adds result formats for line-based JSON and CSV, and additionally `X-Druid-Column-Names` and `X-Druid-Column-Types` response headers containing a list of columns contained in the result.

Added by @gianm in #6191.
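For example, a SQL query posted to the broker's `/druid/v2/sql` endpoint can request one of the new formats via the `resultFormat` field; this is a sketch, with the datasource and query as placeholders.

```json
{
  "query": "SELECT page, COUNT(*) AS views FROM wikipedia GROUP BY page",
  "resultFormat": "csv",
  "header": true
}
```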
'stringLast' and 'stringFirst' aggregators
Introduces two complementary aggregators, `stringLast` and `stringFirst`, which operate on string columns and return the value with the maximum and minimum timestamp, respectively.

Added by @andresgomezfrr in #5789.
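For instance, an aggregation entry might look like the following sketch (the metric and field names are placeholders, and `maxStringBytes` is assumed to be the optional bound on the stored string size):

```json
{
  "type": "stringLast",
  "name": "last_status",
  "fieldName": "status",
  "maxStringBytes": 1024
}
```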
ArrayOfDoublesSketch
Adds support for numeric Tuple sketches, which extend the functionality of the count distinct Theta sketches by adding arrays of double values associated with unique keys.
Added by @AlexanderSaydakov in #5148.
HllSketch
Adds a configurable implementation of a count distinct aggregator based on HllSketch from https://github.com/DataSketches. Comparison to Druid's native HyperLogLogCollector shows improved accuracy, efficiency, and speed: https://datasketches.github.io/docs/HLL/HllSketchVsDruidHyperLogLogCollector.html
Added by @AlexanderSaydakov in #5712.
Support for multiple grouping specs in groupBy query
Adds support for the `subtotalsSpec` groupBy parameter, which allows Druid to be efficient by reusing intermediate results at the broker level when running multiple queries that group by subsets of the same set of columns. See the proposal in #5179 for more information.

Added by @himanshug in #5280.
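A sketch of a groupBy query using the new parameter (the datasource and dimensions are placeholders); each inner list is one grouping set, and an empty list produces the grand total:

```json
{
  "queryType": "groupBy",
  "dataSource": "wikipedia",
  "intervals": ["2018-01-01/2018-01-02"],
  "granularity": "all",
  "dimensions": ["country", "device"],
  "aggregations": [{ "type": "count", "name": "rows" }],
  "subtotalsSpec": [["country", "device"], ["country"], []]
}
```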
Mutual TLS support
Adds support for mutual TLS (server certificate validation + client certificate validation). See: https://en.wikipedia.org/wiki/Mutual_authentication
Added by @jon-wei in #6076.
HTTP based worker management
Adds an HTTP-based indexing task management implementation to replace the previous one based on ZooKeeper. Part of a set of improvements to reduce and eventually eliminate Druid's dependency on ZooKeeper. See #4996 for proposal.
Added by @himanshug in #5104.
Broker backpressure
Allows the broker to exert backpressure on data-serving nodes to prevent the broker from crashing under memory pressure when results are coming in faster than they are being read by clients.
Added by @gianm in #6313.
'maxBytesInMemory' ingestion tuning configuration
Previously, a major tuning parameter for indexing task memory management was the `maxRowsInMemory` configuration, which determined the threshold for spilling the contents of memory to disk. This was difficult to properly configure since the 'size' of a row varied based on multiple factors. `maxBytesInMemory` makes this configuration byte-based instead of row-based.

Added by @surekhasaharan in #5583.
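For example, an indexing task's tuningConfig can now cap in-memory usage by bytes as well as by rows; the values below are placeholders, and reaching either threshold triggers a spill to disk.

```json
{
  "type": "index",
  "maxRowsInMemory": 75000,
  "maxBytesInMemory": 250000000
}
```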
Materialized views
Supports the creation of materialized views which can improve query performance in certain situations at the cost of additional storage. See http://druid.io/docs/latest/development/extensions-contrib/materialized-view.html for more information.
Note: This is a community-contributed extension and is not automatically included in the Druid distribution. We welcome feedback for deciding when to promote this to a core extension. For more information, see Community Extensions.
Added by @zhangxinyu1 in #5556.
Parser for Influx Line Protocol
Adds support for ingesting the Influx Line Protocol data format. For more information, see: https://docs.influxdata.com/influxdb/v1.6/write_protocols/line_protocol_tutorial/
Note: This is a community-contributed extension and is not automatically included in the Druid distribution. We welcome feedback for deciding when to promote this to a core extension. For more information, see Community Extensions.
Added by @njhartwell in #5440.
OpenTSDB emitter
Adds support for emitting Druid metrics to OpenTSDB.
Note: This is a community-contributed extension and is not automatically included in the Druid distribution. We welcome feedback for deciding when to promote this to a core extension. For more information, see Community Extensions.
Added by @QiuMM in #5380.
Updating from 0.12.3 and earlier
Please see below for changes between 0.12.3 and 0.13.0 that you should be aware of before upgrading. If you're updating from an earlier version than 0.12.3, please see release notes of the relevant intermediate versions for additional notes.
MySQL metadata storage extension no longer includes JDBC driver
The MySQL metadata storage extension is now packaged together with the Druid distribution but without the required MySQL JDBC driver (due to licensing restrictions). To use this extension, the driver will need to be downloaded separately and added to the extension directory.
See http://druid.io/docs/latest/development/extensions-core/mysql.html for more details.
AWS region configuration required for S3 extension
As a result of switching from jets3t to the AWS SDK (#5382), users of the S3 extension are now required to explicitly set the target region. This can be done by setting the JVM system property `aws.region` or the environment variable `AWS_REGION`.

As an example, to set the region to 'us-east-1' through system properties:

- Add `-Daws.region=us-east-1` to the jvm.config file for all Druid services.
- Add `-Daws.region=us-east-1` to `druid.indexer.runner.javaOpts` in middleManager/runtime.properties so that the property will be passed to peon (worker) processes.

Ingestion spec changes
As a result of renaming packaging from `io.druid` to `org.apache.druid`, ingestion specs that reference classes by their fully-qualified class name will need to be modified accordingly.

As an example, if you are using the Parquet extension with Hadoop indexing, the `inputFormat` field of the `inputSpec` will need to change from `io.druid.data.input.parquet.DruidParquetInputFormat` to `org.apache.druid.data.input.parquet.DruidParquetInputFormat`.
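For example, the relevant portion of a Hadoop-based ingestion spec would now look roughly like this (the paths are placeholders):

```json
"ioConfig": {
  "type": "hadoop",
  "inputSpec": {
    "type": "static",
    "inputFormat": "org.apache.druid.data.input.parquet.DruidParquetInputFormat",
    "paths": "/path/to/data.parquet"
  }
}
```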
Metrics changes

New metrics

- `task/action/log/time` - Milliseconds taken to log a task action to the audit log (#5714)
- `task/action/run/time` - Milliseconds taken to execute a task action (#5714)
- `query/node/backpressure` - Nanoseconds the channel is unreadable due to backpressure being applied (#6335) (Note that this is not enabled by default and requires a custom implementation of `QueryMetrics` to emit)

New dimensions

- `taskId` and `taskType` added to task-related metrics (#5664)

Other

- `HttpPostEmitterMonitor` no longer emits maxTime and minTime if no times were recorded (#6418)

Rollback restrictions
64-bit doubles aggregators
64-bit doubles aggregators are now used by default (see #5478). Support for 64-bit floating point columns was released in Druid 0.11.0, so if this is enabled, versions older than 0.11.0 will not be able to read the data segments (i.e., you will not be able to roll back to a version older than 0.11.0).

To disable this and keep the old format, you will need to set the system-wide property `druid.indexing.doubleStorage=float`.

Disabling bitmap indexes
0.13.0 adds support for disabling bitmap indexes on a per-column basis, which can save space in cases where bitmap indexes add no value. This is done by setting the 'createBitmapIndex' field in the dimension schema. Segments written with this option will not be backwards compatible with older versions of Druid (#5402).
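For example, a dimension entry in the `dimensionsSpec` could look like this sketch (the dimension name is a placeholder):

```json
{
  "type": "string",
  "name": "userAgent",
  "createBitmapIndex": false
}
```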
utf8mb4 is now the recommended metadata storage charset
For upgrade instructions, use the `ALTER DATABASE` and `ALTER TABLE` instructions as described here: https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-conversion.html.

For motivation and reference, see #5377 and #5411.
Removed configuration properties
- `druid.indexer.runner.tlsStartPort` has been removed (#6194).
- `druid.indexer.runner.separateIngestionEndpoint` has been removed (#6263).

Behavior changes
ParseSpec is now a required field in ingestion specs
There is no longer a default ParseSpec (previously the DelimitedParseSpec). Ingestion specs now require `parseSpec` to be specified. If you previously did not provide a `parseSpec`, you should use one with `"format": "tsv"` to maintain the existing behavior (#6310).
Change to default maximum rows to return in one JDBC frame
The default value for `druid.sql.avatica.maxRowsPerFrame` was reduced from 100k to 5k to minimize out-of-memory errors (#5409).

Router behavior change when routing to brokers dedicated to different time ranges
As a result of #5595, routers may now select an undesired broker in configurations where there are different tiers of brokers that are intended to be dedicated to queries on different time ranges. See #1362 for discussion.
Ruby TimestampSpec no longer ignores milliseconds
Timestamps parsed using a TimestampSpec with format 'ruby' no longer have their millisecond component truncated. If you were using this parser and wanted a query granularity of SECOND, ensure that it is configured appropriately in your indexing specs (#6217).
Small increase in size of ZooKeeper task announcements
The datasource name was added to `TaskAnnouncement`, which will result in a small per-task increase in the amount of data stored in ZooKeeper (#5511).

Addition of 'name' field to filtered aggregators
Aggregators of type 'filtered' now support a 'name' field. Previously, the filtered aggregator inherited the name of the aggregator it wrapped. If you have provided the 'name' field for both the filtered aggregator and the wrapped aggregator, it will prefer the name of the filtered aggregator. It will use the name of the wrapped aggregator if the name of the filtered aggregator is missing or empty (#6219).
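For example, with an explicit name on the outer aggregator (the names and filter below are placeholders), the result column will be 'count_us':

```json
{
  "type": "filtered",
  "name": "count_us",
  "filter": { "type": "selector", "dimension": "country", "value": "US" },
  "aggregator": { "type": "count", "name": "count" }
}
```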
Interface changes for extension developers
- Packaging has been renamed from `io.druid` to `org.apache.druid`. All third-party extensions will need to rename their `META-INF/io.druid.initialization.DruidModule` to `org.apache.druid.initialization.DruidModule` and update their extension's packaging appropriately (#6266).
- The `DataSegmentPuller` interface has been removed (#5461).
- A number of functions under `java-util` have been removed (#5461).
- The constructor of the `Metadata` class has changed (#5613).
- The 'spark2' Maven profile has been removed (#5581).
API deprecations
Overlord

- `/druid/indexer/v1/supervisor/{id}/shutdown` endpoint has been deprecated in favor of `/druid/indexer/v1/supervisor/{id}/terminate` (#6272 and #6234).
- `/druid/indexer/v1/task/{taskId}/segments` endpoint has been deprecated (#6368).
- `status` field returned by `/druid/indexer/v1/task/ID/status` has been deprecated in favor of `statusCode` (#6334).
- `reportParseExceptions` and `ignoreInvalidRows` parameters for ingestion tuning configurations have been deprecated in favor of `logParseExceptions` and `maxParseExceptions` (#5418).

Broker

- `/druid/v2/datasources/{dataSourceName}/dimensions` endpoint has been deprecated. A segment metadata query or the INFORMATION_SCHEMA SQL table should be used instead (#6361).
- `/druid/v2/datasources/{dataSourceName}/metrics` endpoint has been deprecated. A segment metadata query or the INFORMATION_SCHEMA SQL table should be used instead (#6361).

Credits
Thanks to everyone who contributed to this release!
@a2l007 @adursun @ak08 @akashdw @aleksi75 @AlexanderSaydakov @alperkokmen @amalakar @andresgomezfrr @apollotonkosmo @asdf2014 @awelsh93 @b-slim @bolkedebruin @Caroline1000 @chengchengpei @clintropolis @dansuzuki @dclim @DiegoEliasCosta @dragonls @drcrallen @dyanarose @dyf6372 @Dylan1312 @erikdubbelboer @es1220 @evasomething @fjy @Fokko @gaodayue @gianm @hate13 @himanshug @hoesler @jaihind213 @jcollado @jihoonson @jim-slattery-rs @jkukul @jon-wei @josephglanville @jsun98 @kaijianding @KenjiTakahashi @kevinconaway @korvit0 @leventov @lssenthilkumar @mhshimul @niketh @NirajaM @nishantmonu51 @njhartwell @palanieppan-m @pdeva @pjain1 @QiuMM @redlion99 @rpless @samarthjain @Scorpil @scrawfor @shiroari @shivtools @siddharths @SpyderRivera @spyk @stuartmclean @surekhasaharan @susielu @varaga @vineshcpaul @vvc11 @wysstartgo @xvrl @yunwan @yuppie-flu @yurmix @zhangxinyu1 @zhztheplayer