
[DRAFT] 28.0.0 release notes #15326

Closed LakshSingla closed 10 months ago

LakshSingla commented 10 months ago

Apache Druid 28.0.0 contains over 420 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 57 contributors.

See the complete set of changes for additional details, including bug fixes.

Review the upgrade notes and incompatible changes before you upgrade to Druid 28.0.0.

# Important features, changes, and deprecations

In Druid 28.0.0, we have made substantial improvements to querying to make the system more ANSI SQL compatible. This includes changes to how NULL values, boolean values, and boolean logic are handled. At the same time, the Apache Calcite library has been upgraded to the latest version. While we have documented the known query behavior changes, please read the upgrade notes carefully. Test your application before rolling out broadly to production, and closely monitor query behavior during the rollout.

# SQL compatibility

Druid continues to make SQL query execution more consistent with how standard SQL behaves. However, there are feature flags available to restore the old behavior if needed.

# Three-valued logic

By default, Druid native filters now observe SQL three-valued logic (true, false, or unknown) instead of Druid's classic two-state logic. This applies when the new default settings described in the following sections are in effect: druid.generic.useDefaultValueForNull=false and druid.expressions.useStrictBooleans=true.
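
As a brief illustration of the new default behavior (datasource and column names are hypothetical), rows where the filtered column is NULL no longer match a negated filter and must be matched explicitly:

-- Under three-valued logic, NULL rows match neither of these filters:
SELECT COUNT(*) FROM store_sales WHERE country = 'US'
SELECT COUNT(*) FROM store_sales WHERE country <> 'US'
-- To also count the NULL rows, filter for them explicitly:
SELECT COUNT(*) FROM store_sales WHERE country <> 'US' OR country IS NULL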

#15058

# Strict booleans

druid.expressions.useStrictBooleans is now enabled by default. Druid now handles booleans strictly using 1 (true) or 0 (false). Previously, true and false could be represented either as true and false or as 1 and 0, respectively. In addition, Druid now returns a null value for boolean comparisons like true && NULL.

If you don't explicitly configure this property in runtime.properties, clusters now use LONG types for any ingested boolean values and in the output of boolean functions for transformations and query time operations. For more information, see SQL compatibility in the upgrade notes.
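
For example (hypothetical datasource and column), a boolean-producing expression now comes back as a LONG:

-- is_us is a LONG containing 1 (true), 0 (false), or null when countryName is null.
SELECT countryName = 'US' AS is_us
FROM store_sales
LIMIT 5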

#14734

# NULL handling

druid.generic.useDefaultValueForNull is now disabled by default. Druid now differentiates between empty records and null records. Previously, Druid could treat empty records and null records interchangeably. For more information, see SQL compatibility in the upgrade notes.
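
A minimal sketch of the difference, assuming a datasource with a nullable string column named comment:

-- Empty strings and nulls are now distinct values and can be counted separately.
SELECT
  COUNT(*) FILTER (WHERE comment = '') AS empty_strings,
  COUNT(*) FILTER (WHERE comment IS NULL) AS null_values
FROM store_sales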

#14792

# SQL planner improvements

Druid uses Apache Calcite for SQL planning and optimization. Starting in Druid 28.0.0, the Calcite version has been upgraded from 1.21 to 1.35. This upgrade brings in many bug fixes in SQL planning from Calcite.

# Dynamic parameters

As part of the Calcite upgrade, the behavior of type inference for dynamic parameters has changed. To avoid type inference issues, explicitly CAST all dynamic parameters as a specific data type in SQL queries. For example, use:

SELECT (1 * CAST (? as DOUBLE))/2 as tmp

Do not use:

SELECT (1 * ?)/2 as tmp

# Async query and query from deep storage

Query from deep storage is no longer an experimental feature. When you query from deep storage, more data is available for queries without having to scale your Historical services to accommodate more data. To benefit from the space saving that query from deep storage offers, configure your load rules to unload data from your Historical services.
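
For example, the following retention rules (a sketch with illustrative values) keep only the most recent month of data on Historical services while older data remains in deep storage and stays queryable through query from deep storage:

[
  {
    "type": "loadByPeriod",
    "period": "P1M",
    "includeFuture": true,
    "tieredReplicants": { "_default_tier": 2 }
  },
  { "type": "dropForever" }
]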

# Support for multiple result formats

Query from deep storage now supports multiple result formats. Previously, the /druid/v2/sql/statements/ endpoint only supported results in the object format. Now, results can be written in any format specified in the resultFormat parameter. For more information on result parameters supported by the Druid SQL API, see Responses.
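
For example, a request body for the /druid/v2/sql/statements/ endpoint can now ask for CSV results (the query shown is illustrative):

{
  "query": "SELECT channel, COUNT(*) AS edits FROM wikipedia GROUP BY channel",
  "resultFormat": "csv"
}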

#14571

# Broadened access for queries from deep storage

Users with the STATE permission can interact with status APIs for queries from deep storage. Previously, only the user who submitted the query could use those APIs. This enables the web console to monitor the running status of the queries. Users with the STATE permission can access the query results.

#14944

# MSQ queries for realtime tasks

The MSQ task engine can now include realtime segments in query results. To do this, set the includeSegmentSource context parameter to REALTIME.
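
For example, the query context for an MSQ query that should also read from realtime segments includes:

{
  "includeSegmentSource": "REALTIME"
}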

#15024

# MSQ support for UNION ALL queries

You can now use the MSQ task engine to run UNION ALL queries with UnionDataSource.
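
A minimal sketch (datasource names are illustrative):

SELECT channel, COUNT(*) AS edits
FROM (
  SELECT channel FROM wikipedia_2015
  UNION ALL
  SELECT channel FROM wikipedia_2016
) AS combined
GROUP BY channel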

#14981

# Ingest from multiple Kafka topics to a single datasource

You can now ingest streaming data from multiple Kafka topics to a datasource using a single supervisor. You configure the topics for the supervisor spec using a regex pattern as the value for topicPattern in the IO config. If you add new topics to Kafka that match the regex, Druid automatically starts ingesting from those new topics.

If you enable multi-topic ingestion for a datasource, downgrading will cause the Supervisor to fail. For more information, see Stop supervisors that ingest from multiple Kafka topics before downgrading.
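
For example, a partial Kafka supervisor spec (values are illustrative) that ingests from every topic matching a prefix:

{
  "type": "kafka",
  "spec": {
    "ioConfig": {
      "type": "kafka",
      "topicPattern": "clickstream-.*",
      "consumerProperties": {
        "bootstrap.servers": "kafka-broker-01:9092"
      }
    }
  }
}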

#14424 #14865

# SQL UNNEST and ingestion flattening

The UNNEST function is no longer experimental.

Druid now supports UNNEST in SQL-based batch ingestion and query from deep storage, so you can flatten arrays easily. For more information, see UNNEST and Unnest arrays within a column.

You no longer need to include the context parameter enableUnnest: true to use UNNEST.

#14886

# Recommended syntax for SQL UNNEST

The recommended syntax for SQL UNNEST has changed. We recommend using CROSS JOIN instead of commas for most queries to prevent issues with precedence. For example, use:

SELECT column_alias_name1 FROM datasource CROSS JOIN UNNEST(source_expression1) AS table_alias_name1(column_alias_name1) CROSS JOIN UNNEST(source_expression2) AS table_alias_name2(column_alias_name2), ...

Do not use:

SELECT column_alias_name FROM datasource, UNNEST(source_expression1) AS table_alias_name1(column_alias_name1), UNNEST(source_expression2) AS table_alias_name2(column_alias_name2), ...

# Window functions (experimental)

You can use window functions in Apache Druid to produce values based upon the relationship of one row within a window of rows to the other rows within the same window. A window is a group of related rows within a result set, for example, rows with the same value for a specific dimension.

Enable window functions in your query with the enableWindowing: true context parameter.
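
A minimal sketch (hypothetical datasource and columns) that ranks channels by edit count, run with the enableWindowing: true context parameter:

SELECT
  channel,
  COUNT(*) AS edits,
  RANK() OVER (ORDER BY COUNT(*) DESC) AS edit_rank
FROM wikipedia
GROUP BY channel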

#15184

# Concurrent append and replace (experimental)

Druid 28.0.0 adds experimental support for concurrent append and replace. This feature allows you to safely replace the existing data in an interval of a datasource while new data is being appended to that interval. One of the most common applications of this is appending new data to an interval while compaction of that interval is already in progress. For more information, see Concurrent append and replace.

Segment locking will be deprecated and removed in favor of concurrent append and replace, which is much simpler in design. With concurrent append and replace, Druid doesn't lock out compaction jobs because of active realtime ingestion.

# Task locks for append and replace batch ingestion jobs

Append batch ingestion jobs can now share locks. This allows you to run multiple append batch ingestion jobs against the same time interval. Replace batch ingestion jobs still require an exclusive lock. This means you can run multiple append batch ingestion jobs and one replace batch ingestion job for a given interval.
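
For example, assuming the taskLockType context parameter described in the concurrent append and replace documentation, an appending ingestion task can opt into a shared lock with the following task context, while the replacing job (such as compaction) uses "taskLockType": "REPLACE":

{
  "taskLockType": "APPEND"
}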

#14407

# Streaming ingestion with concurrent replace

Streaming jobs reading from Kafka and Kinesis with APPEND locks can now ingest concurrently with compaction running with REPLACE locks. The segment granularity of the streaming job must be equal to or finer than that of the concurrent replace job.

#15039

# Functional area and related changes

This section contains detailed release notes separated by areas.

# Web console

# Added UI support for segment loading query context parameter

The web console supports the waitUntilSegmentsLoad query context parameter.

UI for waitUntilSegmentsLoad context parameter

#15110

# Added concurrent append and replace switches

The web console includes concurrent append and replace switches.

The following screenshot shows the concurrent append and replace switches in the classic batch ingestion wizard:

Classic batch ingestion wizard

The following screenshot shows the concurrent append and replace switches in the compaction configuration UI:

Compaction configuration UI

#15114

# Added UI support for ingesting from multiple Kafka topics to a single datasource

The web console supports ingesting streaming data from multiple Kafka topics to a datasource using a single supervisor.

UI for Kafka multi-topic ingestion

#14833

# Other web console improvements

# Ingestion

# JSON and auto column indexer

The json column type is now equivalent to using auto in JSON-based batch ingestion dimension specs. Upgrade your ingestion specs to json to take advantage of the features and functionality of auto, including the following:

json type columns created with Druid 28.0.0 are not backwards compatible with Druid versions older than 26.0.0. If you upgrade from one of these versions, you can continue to write nested columns in a backwards compatible format (version 4).

For more information, see Nested column format in the upgrade notes.

#14955 #14456

# Ingestion status

Ingestion reports now include a segmentLoadStatus object that provides information related to the ingestion, such as duration and total segments.

#14322

# SQL-based ingestion

# Ability to ingest ARRAY types

SQL-based ingestion now supports storing ARRAY typed values in ARRAY typed columns as well as storing both VARCHAR and numeric typed arrays. Previously, the MSQ task engine stored ARRAY typed values as multi-value dimensions instead of ARRAY typed columns.

The MSQ task engine now includes the arrayIngestMode query context parameter, which controls how ARRAY types are stored in Druid segments. Set the arrayIngestMode query context parameter to array to ingest ARRAY types.

In Druid 28.0.0, the default mode for arrayIngestMode is mvd for backwards compatibility, which only supports VARCHAR typed arrays and stores them as multi-value dimensions. This default is subject to change in future releases.

For information on how to migrate to the new behavior, see the Ingestion options for ARRAY typed columns in the upgrade notes. For information on inserting, filtering, and grouping behavior for ARRAY typed columns, see Array columns.
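
A minimal sketch of an MSQ ingestion that stores an ARRAY typed column (datasource, input, and column names are illustrative), run with the query context {"arrayIngestMode": "array"}:

INSERT INTO "events_with_arrays"
SELECT
  TIME_PARSE("timestamp") AS __time,
  "userId",
  ARRAY['web', 'mobile'] AS "channels"
FROM TABLE(
  EXTERN(
    '{"type": "inline", "data": "{\"timestamp\": \"2023-01-01T00:00:00Z\", \"userId\": \"u1\"}"}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "userId", "type": "string"}]'
  )
)
PARTITIONED BY DAY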

#15093

# Numeric array type support

Row-based frames, and by extension the MSQ task engine, now support numeric array types. This means that all queries consuming or producing arrays work with the MSQ task engine. Numeric arrays can also be ingested using SQL-based ingestion with MSQ. For example, queries like SELECT [1, 2] are now valid because they return a numeric array instead of failing with an unsupported column type exception.

#14900

# Azure Blob Storage support

Added support for Microsoft Azure Blob Storage. You can now use fault tolerance and durable storage with Microsoft Azure Blob Storage. For more information, see Durable storage.

#14660

# Other SQL-based ingestion improvements

# Streaming ingestion

# Ability to reset offsets for a supervisor

Added a new API endpoint /druid/indexer/v1/supervisor/:supervisorId/resetOffsets to reset specific partition offsets for a supervisor without resetting the entire set. This endpoint clears only the specified offsets in Kafka or sequence numbers in Kinesis, prompting the supervisor to resume data reading.

#14772

# Other streaming ingestion improvements

# Querying

# Improved LOOKUP function

The LOOKUP function now accepts an optional constant string as a third argument. This string is used to replace missing values in results. For example, the query LOOKUP(store, 'store_to_country', 'NA') returns NA if the store_to_country value is missing for a given store.

#14956

# AVG function

The AVG aggregation function now returns a double instead of a long.

#15089

# Improvements to EARLIEST and LATEST operators

Improved EARLIEST and LATEST operators as follows:

# Functions for evaluating distinctness

New SQL and native query functions allow you to evaluate whether two expressions are distinct or not distinct. Expressions are distinct if they have different values or if one of them is NULL. Expressions are not distinct if their values are the same or if both of them are NULL.

Because the functions treat NULLs as known values when used as a comparison operator, they always return true or false even if one or both expressions are NULL.

The following table shows the difference in behavior between the equals sign (=) and IS [NOT] DISTINCT FROM:

| A | B | A=B | A IS NOT DISTINCT FROM B |
|---|---|-----|--------------------------|
| 0 | 0 | true | true |
| 0 | 1 | false | false |
| 0 | null | unknown | false |
| null | null | unknown | true |
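
For example (hypothetical datasource and columns), the following filter also matches rows where exactly one of the two columns is NULL, which a plain <> comparison would not:

SELECT *
FROM store_sales
WHERE shipping_country IS DISTINCT FROM billing_country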

#14976

# Functions for evaluating equalities

New SQL and native query functions allow you to evaluate whether a condition is true or false. These functions are different from x == true and x != true in that they never return null even when the variable is null.

| SQL function | Native function |
|--------------|-----------------|
| IS_TRUE | istrue() |
| IS_FALSE | isfalse() |
| IS_NOT_TRUE | nottrue() |
| IS_NOT_FALSE | notfalse() |

#14977

# Function to decode Base64-encoded strings

The new SQL and native query function decode_base64_utf8 decodes a Base64-encoded string and returns the UTF-8-encoded string. For example, decode_base64_utf8('aGVsbG8=').

#14943

# Improved subquery guardrail

You can now set the maxSubqueryBytes guardrail to one of the following:

#14808

# Other query improvements

# Cluster management

# Unused segments

Druid now stops loading and moving segments as soon as they are marked as unused. This prevents Historical processes from spending time on superfluous loads of segments that will be unloaded later. You can mark segments as unused by a drop rule, overshadowing, or by calling the Data management API.

#14644

# Encrypt data in transit

The net.spy.memcached client has been replaced with the AWS ElastiCache client. This change allows Druid to encrypt data in transit using TLS.

Configure it with the following properties:

| Property | Description | Default |
|----------|-------------|---------|
| druid.cache.enableTls | Enable TLS-based connections for the Memcached client. Boolean. | false |
| druid.cache.clientMode | Client mode. Static mode requires the user to specify individual cluster nodes. Dynamic mode uses the AutoDiscovery feature of AWS Memcached. String, either "static" or "dynamic". | static |
| druid.cache.skipTlsHostnameVerification | Skip TLS hostname verification. Boolean. | true |
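
A sketch of the relevant runtime properties (the endpoint value is illustrative; druid.cache.type and druid.cache.hosts are the existing Memcached cache settings):

druid.cache.type=memcached
druid.cache.hosts=my-cluster.cfg.use1.cache.amazonaws.com:11211
druid.cache.enableTls=true
druid.cache.clientMode=dynamic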

#14827

# New metadata in the Druid segments table

The Druid segments table now has a column called used_flag_last_updated (VARCHAR (255)). This column is a UTC date string corresponding to the last time that the used column was modified.

Note that this is an incompatible change to the table. For upgrade information, see Upgrade Druid segments table.

#12599

# Other cluster management improvements

# Data management

# Alert message for segment assignments

Improved the alert message for segment assignments when an invalid tier is specified in a load rule or when no rule applies to a segment.

#14696

# Coordinator API for unused segments

Added includeUnused as an optional parameter to the Coordinator API. You can send a GET request to /druid/coordinator/v1/metadata/datasources/{dataSourceName}/segments/{segmentId}?includeUnused=true to retrieve the metadata for a specific segment as stored in the metadata store.

The API also returns unused segments if the includeUnused parameter is set.

#14846

# Kill task improvements

# Metrics and monitoring

# New ingestion metrics

| Metric | Description | Dimensions | Normal value |
|--------|-------------|------------|--------------|
| ingest/input/bytes | Number of bytes read from input sources, after decompression but prior to parsing. This covers all data read, including data that does not end up being fully processed and ingested. For example, this includes data that ends up being rejected for being unparseable or filtered out. | dataSource, taskId, taskType, groupId, tags | Depends on the amount of data read. |

#14582

# New query metrics

| Metric | Description | Dimensions | Normal value |
|--------|-------------|------------|--------------|
| mergeBuffer/pendingRequests | Number of requests waiting to acquire a batch of buffers from the merge buffer pool. This metric is exposed through the QueryCountStatsMonitor module for the Broker. | | |

#15025

# New ZooKeeper metrics

| Metric | Description | Dimensions | Normal value |
|--------|-------------|------------|--------------|
| zk/connected | Indicator of connection status. 1 for connected, 0 for disconnected. Emitted once per monitor period. | None | 1 |
| zk/reconnect/time | Amount of time, in milliseconds, that a server was disconnected from ZooKeeper before reconnecting. Emitted on reconnection. Not emitted if connection to ZooKeeper is permanently lost, because in this case, there is no reconnection. | None | Not present |

#14333

# New subquery metrics for the Broker

The new SubqueryCountStatsMonitor emits metrics corresponding to the subqueries and their execution.

| Metric | Description | Dimensions | Normal value |
|--------|-------------|------------|--------------|
| subquery/rowLimit/count | Number of subqueries whose results are materialized as rows (Java objects on heap). This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| subquery/byteLimit/count | Number of subqueries whose results are materialized as frames (Druid's internal byte representation of rows). This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| subquery/fallback/count | Number of subqueries which cannot be materialized as frames. This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| subquery/fallback/insufficientType/count | Number of subqueries which cannot be materialized as frames due to insufficient type information in the row signature. This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| subquery/fallback/unknownReason/count | Number of subqueries which cannot be materialized as frames due to other reasons. This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| query/rowLimit/exceeded/count | Number of queries whose inlined subquery results exceeded the given row limit. This metric is only available if the SubqueryCountStatsMonitor module is included. | | |
| query/byteLimit/exceeded/count | Number of queries whose inlined subquery results exceeded the given byte limit. This metric is only available if the SubqueryCountStatsMonitor module is included. | | |

#14808

# New Coordinator metrics

| Metric | Description | Dimensions | Normal value |
|--------|-------------|------------|--------------|
| killTask/availableSlot/count | Number of available task slots that can be used for auto kill tasks in the auto kill run. This is the max number of task slots minus any currently running auto kill tasks. | | Varies |
| killTask/maxSlot/count | Maximum number of task slots available for auto kill tasks in the auto kill run. | | Varies |
| kill/task/count | Number of tasks issued in the auto kill run. | | Varies |
| kill/pendingSegments/count | Number of stale pending segments deleted from the metadata store. | dataSource | Varies |

#14782 #14951

# New compaction metrics

| Metric | Description | Dimensions | Normal value |
|--------|-------------|------------|--------------|
| compact/segmentAnalyzer/fetchAndProcessMillis | Time taken to fetch and process segments to infer the schema for the compaction task to run. | dataSource, taskId, taskType, groupId, tags | Varies. A high value indicates compaction tasks will speed up from explicitly setting the data schema. |

#14752

# Segment scan metrics

Added a new metric to help you gauge the usage of druid.processing.numThreads on Historicals, Indexers, and Peons.

| Metric | Description | Dimensions | Normal value |
|--------|-------------|------------|--------------|
| segment/scan/active | Number of segments currently scanned. This metric also indicates how many threads from druid.processing.numThreads are currently being used. | | Close to druid.processing.numThreads |

# New Kafka consumer metrics

Added the following Kafka consumer metrics:

#14582

# service/heartbeat metrics

# Tombstone and segment counts

Added ingest/tombstones/count and ingest/segments/count metrics in MSQ to report the number of tombstones and segments after Druid finishes publishing segments.

#14980

# Extensions

# Ingestion task payloads for Kubernetes

You can now provide compressed task payloads larger than 128 KB when you run MiddleManager-less ingestion jobs.

#14887

# Prometheus emitter

The Prometheus emitter now supports a new optional configuration parameter, druid.emitter.prometheus.extraLabels. This addition offers the flexibility to add arbitrary extra labels to Prometheus metrics, providing more granular control in managing and identifying data across multiple Druid clusters or other dimensions. For more information, see Prometheus emitter extension.
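
For example (label names and values are illustrative), extra labels are supplied as a JSON map:

druid.emitter.prometheus.extraLabels={"cluster_name": "druid-analytics-prod", "env": "production"}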

#14728

# Documentation improvements

We've moved Jupyter notebooks that guide you through query, ingestion, and data management with Apache Druid to the new Learn Druid repository. The repository also contains a Docker Compose file to get you up and running with a learning lab.

#15136

LakshSingla commented 10 months ago

# Upgrade notes and incompatible changes

# Upgrade notes

# Upgrade Druid segments table

Druid 28.0.0 adds a new column to the Druid metadata table that requires an update to the table.

If druid.metadata.storage.connector.createTables is set to true and the metadata store user has DDL privileges, the segments table gets automatically updated at startup to include the new used_flag_last_updated column. No additional work is needed for the upgrade.

If either of those requirements is not met, pre-upgrade steps are required. You must make these updates before you upgrade to Druid 28.0.0; otherwise, the Coordinator and Overlord processes fail.

Although you can manually alter your table to add the new used_flag_last_updated column, Druid also provides a CLI tool to do it.

#12599

In the example commands below:

# Upgrade step for MySQL

cd ${DRUID_ROOT}
java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[\"mysql-metadata-storage\"] -Ddruid.metadata.storage.type=mysql org.apache.druid.cli.Main tools metadata-update --connectURI="<mysql-uri>" --user USER --password PASSWORD --base druid --action add-used-flag-last-updated-to-segments

# Upgrade step for PostgreSQL

cd ${DRUID_ROOT}
java -classpath "lib/*" -Dlog4j.configurationFile=conf/druid/cluster/_common/log4j2.xml -Ddruid.extensions.directory="extensions" -Ddruid.extensions.loadList=[\"postgresql-metadata-storage\"] -Ddruid.metadata.storage.type=postgresql org.apache.druid.cli.Main tools metadata-update --connectURI="<postgresql-uri>" --user  USER --password PASSWORD --base druid --action add-used-flag-last-updated-to-segments

# Manual upgrade step

ALTER TABLE druid_segments
ADD used_flag_last_updated varchar(255);

# Recommended syntax for SQL UNNEST

The recommended syntax for SQL UNNEST has changed. We recommend using CROSS JOIN instead of commas for most queries to prevent issues with precedence. For example, use:

SELECT column_alias_name1 FROM datasource CROSS JOIN UNNEST(source_expression1) AS table_alias_name1(column_alias_name1) CROSS JOIN UNNEST(source_expression2) AS table_alias_name2(column_alias_name2), ...

Do not use:

SELECT column_alias_name FROM datasource, UNNEST(source_expression1) AS table_alias_name1(column_alias_name1), UNNEST(source_expression2) AS table_alias_name2(column_alias_name2), ...

# Dynamic parameters

The Apache Calcite version has been upgraded from 1.21 to 1.35. As part of the Calcite upgrade, the behavior of type inference for dynamic parameters has changed. To avoid type inference issues, explicitly CAST all dynamic parameters as a specific data type in SQL queries. For example, use:

SELECT (1 * CAST (? as DOUBLE))/2 as tmp

Do not use:

SELECT (1 * ?)/2 as tmp

# Nested column format

json type columns created with Druid 28.0.0 are not backwards compatible with Druid versions older than 26.0.0. If you are upgrading from a version prior to Druid 26.0.0 and you use json columns, upgrade to Druid 26.0.0 before you upgrade to Druid 28.0.0. Additionally, to downgrade to a version older than Druid 26.0.0, any new segments created in Druid 28.0.0 should be re-ingested using Druid 26.0.0 or 27.0.0 prior to further downgrading.

When upgrading from a previous version, you can continue to write nested columns in a backwards compatible format (version 4).

In a classic batch ingestion job, include formatVersion in the dimensions list of the dimensionsSpec property. For example:

      "dimensionsSpec": {
        "dimensions": [
          "product",
          "department",
          {
            "type": "json",
            "name": "shipTo",
            "formatVersion": 4
          }
        ]
      },

To set the default nested column version, set the desired format version in the common runtime properties. For example:

druid.indexing.formats.nestedColumnFormatVersion=4

# SQL compatibility

Starting with Druid 28.0.0, the default way Druid treats nulls and booleans has changed.

For nulls, Druid now differentiates between an empty string ('') and a record with no data (null), as well as between a null numerical value and 0.
You can revert to the previous behavior by setting druid.generic.useDefaultValueForNull to true.

This property affects both storage and querying, and must be set on all Druid service types to be available at both ingestion time and query time. Reverting this setting to the old value restores the previous behavior without reingestion.

For booleans, Druid now strictly uses 1 (true) or 0 (false). Previously, true and false could be represented either as true and false or as 1 and 0, respectively. In addition, Druid now returns a null value for boolean comparisons like true && NULL.

You can revert to the previous behavior by setting druid.expressions.useStrictBooleans to false. This property affects both storage and querying, and must be set on all Druid service types to be available at both ingestion time and query time. Reverting this setting to the old value restores the previous behavior without reingestion.

The following table illustrates some example scenarios and the impact of the changes.

| Query | Druid 27.0.0 and earlier | Druid 28.0.0 and later |
|-------|--------------------------|------------------------|
| Query empty string | Empty string (`''`) or null | Empty string (`''`) |
| Query null string | Null or empty | Null |
| COUNT(*) | All rows, including nulls | All rows, including nulls |
| COUNT(column) | All rows excluding empty strings | All rows including empty strings but excluding nulls |
| Expression 100 && 11 | 11 | 1 |
| Expression 100 \|\| 11 | 100 | 1 |
| Null FLOAT/DOUBLE column | 0.0 | Null |
| Null LONG column | 0 | Null |
| Null `__time` column | 0, meaning 1970-01-01 00:00:00 UTC | 1970-01-01 00:00:00 UTC |
| Null MVD column | `''` | Null |
| ARRAY | Null | Null |
| COMPLEX | none | Null |

Before upgrading to Druid 28.0.0, update your queries to account for the changed behavior as described in the following sections.

# NULL filters

If your queries use NULL in the filter condition to match both nulls and empty strings, you should add an explicit filter clause for empty strings. For example, update s IS NULL to s IS NULL OR s = ''.

# COUNT functions

COUNT(column) now counts empty strings. If you want to continue excluding empty strings from the count, replace COUNT(column) with COUNT(column) FILTER(WHERE column <> '').

# GroupBy queries

GroupBy queries on columns containing null values can now return additional entries, because nulls no longer group together with empty strings.

# Stop Supervisors that ingest from multiple Kafka topics before downgrading

If you have added supervisors that ingest from multiple Kafka topics in Druid 28.0.0 or later, stop those supervisors before downgrading to a version prior to Druid 28.0.0, because they will fail in those earlier versions.

# lenientAggregatorMerge deprecated

The lenientAggregatorMerge property in segment metadata queries has been deprecated and will be removed in future releases. Use aggregatorMergeStrategy instead. In addition to the strict and lenient strategies from lenientAggregatorMerge, aggregatorMergeStrategy also supports the latest and earliest strategies.

#14560 #14598

# Broker parallel merge config options

The paths for druid.processing.merge.pool.* and druid.processing.merge.task.* have been flattened to use druid.processing.merge.* instead. The legacy paths for the configs are now deprecated and will be removed in a future release. Migrate your settings to use the new paths because the old paths will be ignored in the future.
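
For example, a Broker setting such as druid.processing.merge.pool.parallelism (the sub-key shown here is illustrative) should now be set on the flattened path:

druid.processing.merge.parallelism=8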

#14695

# Ingestion options for ARRAY typed columns

Starting with Druid 28.0.0, the MSQ task engine can detect and ingest arrays as ARRAY typed columns when you set the query context parameter arrayIngestMode to array. The arrayIngestMode context parameter controls how ARRAY type values are stored in Druid segments.

When you set arrayIngestMode to array (recommended for SQL compliance), the MSQ task engine stores all ARRAY typed values in ARRAY typed columns and supports storing both VARCHAR and numeric typed arrays.

For backwards compatibility, arrayIngestMode defaults to mvd. When "arrayIngestMode":"mvd", Druid only supports VARCHAR typed arrays and stores them as multi-value string columns.

When you set arrayIngestMode to none, Druid throws an exception when trying to store any type of arrays.

For more information on how to ingest ARRAY typed columns with SQL-based ingestion, see SQL data types and Array columns.

# Incompatible changes

# Removed Hadoop 2

Support for Hadoop 2 has been removed. Migrate to SQL-based ingestion or JSON-based batch ingestion if you are using Hadoop 2.x for ingestion today. If migrating to Druid's built-in ingestion is not possible, you must upgrade your Hadoop infrastructure to 3.x+ before upgrading to Druid 28.0.0.

#14763

# Removed GroupBy v1

The GroupBy v1 engine has been removed. Use the GroupBy v2 engine instead, which has been the default GroupBy engine for several releases. There should be no impact on your queries.

Additionally, AggregatorFactory.getRequiredColumns has been deprecated and will be removed in a future release. If you have an extension that implements AggregatorFactory, then this method should be removed from your implementation.

#14866

# Removed Coordinator dynamic configs

The decommissioningMaxPercentOfMaxSegmentsToMove config has been removed. The use case for this config is handled by smart segment loading now, which is enabled by default.

#14923

# Removed cachingCost strategy

The cachingCost strategy for segment loading has been removed. Use cost instead, which has the same benefits as cachingCost.

If you have cachingCost set, the system ignores this setting and automatically uses cost.

#14798

# Removed InsertCannotOrderByDescending

The deprecated MSQ fault InsertCannotOrderByDescending has been removed.

#14588

# Removed the backward compatibility code for the Handoff API

The backward compatibility code for the Handoff API in CoordinatorBasedSegmentHandoffNotifier has been removed. If you are upgrading from a Druid version older than 0.14.0, upgrade to a newer version of Druid before upgrading to Druid 28.0.0.

#14652

# Developer notes

# Dependency updates

The following dependencies have had their versions bumped:

# Credits

@2bethere @317brian @a2l007 @abhishekagarwal87 @abhishekrb19 @adarshsanjeev @aho135 @AlexanderSaydakov @AmatyaAvadhanula @asdf2014 @benkrug @capistrant @clintropolis @cristian-popa @cryptoe @demo-kratia @dependabot[bot] @ektravel @findingrish @gargvishesh @georgew5656 @gianm @giuliotal @hardikbajaj @hqx871 @imply-cheddar @Jaehui-Lee @jasonk000 @jon-wei @kaisun2000 @kfaraz @kgyrtkirk @LakshSingla @lorem--ipsum @maytasm @pagrawal10 @panhongan @petermarshallio @pranavbhole @rash67 @rohangarg @SamWheating @sergioferragut @slfan1989 @somu-imply @suneet-s @techdocsmith @tejaswini-imply @TSFenwick @vogievetsky @vtlim @writer-jill @xvrl @yianni @YongGang @yuanlihan @zachjsh

xvrl commented 10 months ago

@LakshSingla can we close this now that 28 has been released?