SETL-Framework / setl

A simple Spark-powered ETL framework that just works 🍺
Apache License 2.0

chore(deps): bump delta-core_2.12 from 1.1.0 to 2.1.1 #278

Closed dependabot[bot] closed 1 year ago

dependabot[bot] commented 2 years ago

Bumps delta-core_2.12 from 1.1.0 to 2.1.1.
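The bump itself is a one-line change in the build definition. A minimal sbt sketch — the Maven coordinates match this PR, while the surrounding settings are illustrative:

```scala
// build.sbt — bumping the Delta Lake dependency (coordinates from this PR)
libraryDependencies += "io.delta" %% "delta-core" % "2.1.1"  // was 1.1.0

// Note: delta-core 2.1.x targets Apache Spark 3.3, so the Spark
// dependencies must be upgraded in lockstep (illustrative version):
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.3.0" % Provided
```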

Release notes

Sourced from delta-core_2.12's releases.

Delta Lake 2.1.0

We are excited to announce the release of Delta Lake 2.1.0 on Apache Spark 3.3. Similar to Apache Spark™, we have released Maven artifacts for both Scala 2.12 and Scala 2.13.

The key features in this release are as follows:

  • Support for Apache Spark 3.3.
  • Support for [TIMESTAMP | VERSION] AS OF in SQL. With Spark 3.3, Delta now supports time travel in SQL to query older data easily. With this update, time travel is now available both in SQL and through the DataFrame API.
  • Support for Trigger.AvailableNow when streaming from a Delta table. Spark 3.3 introduces Trigger.AvailableNow for running streaming queries like Trigger.Once in multiple batches. This is now supported when using Delta tables as a streaming source.
  • Support for SHOW COLUMNS to return the list of columns in a table.
  • Support for DESCRIBE DETAIL in the Scala and Python DeltaTable API. Retrieve detailed information about a Delta table using the DeltaTable API and in SQL.
  • Support for returning operation metrics from SQL Delete, Merge, and Update commands. Previously these SQL commands returned an empty DataFrame; now they return a DataFrame with useful metrics about the operation performed.
  • OPTIMIZE performance improvements
    • Added a config to use repartition(1) instead of coalesce(1) in Optimize for better performance when compacting many small files.
    • Improve Optimize performance by using a queue-based approach to parallelize the compaction jobs.
  • Other notable changes
    • Support for using variables in the VACUUM and OPTIMIZE SQL commands.
    • Improvements for CONVERT TO DELTA with catalog tables.
      • Autofill the partition schema from the catalog when it’s not provided.
      • Use partition information from the catalog to find the data files to commit instead of doing a full directory scan. Instead of committing all data files in the table directory, only data files under the directories of active partitions will be committed.
    • Support for Change Data Feed (CDF) batch reads on column mapping enabled tables when DROP COLUMN and RENAME COLUMN have not been used. See the documentation for more details.
    • Improve Update performance by enabling schema pruning in the first pass.
    • Fix for DeltaTableBuilder to preserve table property case of non-delta properties when setting properties.
    • Fix for duplicate CDF row output for delete-when-matched merges with multiple matches.
    • Fix for consistent timestamps in a MERGE command.
    • Fix for incorrect operation metrics for DataFrame writes with a replaceWhere option.
    • Fix for a bug in Merge that sometimes caused empty files to be committed to the table.
    • Change in log4j properties file format. Apache Spark upgraded log4j from 1.x to 2.x, which uses a different properties file format. Refer to the Spark upgrade notes.
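Several of the SQL-facing features above can be exercised directly through `spark.sql`. A minimal Scala sketch, assuming a SparkSession configured with the Delta extensions and an existing Delta table named `events` — the table name and checkpoint path are hypothetical, not from this repo:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

// Assumes delta-core 2.1.x and Spark 3.3 on the classpath.
val spark = SparkSession.builder()
  .appName("delta-2.1-features")
  .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
  .config("spark.sql.catalog.spark_catalog",
          "org.apache.spark.sql.delta.catalog.DeltaCatalog")
  .getOrCreate()

// Time travel in SQL (new with Spark 3.3 support): query an older snapshot.
spark.sql("SELECT * FROM events VERSION AS OF 0").show()
spark.sql("SELECT * FROM events TIMESTAMP AS OF '2022-09-01'").show()

// SHOW COLUMNS and DESCRIBE DETAIL.
spark.sql("SHOW COLUMNS IN events").show()
spark.sql("DESCRIBE DETAIL events").show()

// DELETE now returns operation metrics instead of an empty DataFrame.
spark.sql("DELETE FROM events WHERE id < 0").show()  // e.g. num_affected_rows

// Trigger.AvailableNow when streaming from a Delta source: process all
// currently available data in multiple batches, then stop.
spark.readStream.format("delta").table("events")
  .writeStream
  .trigger(Trigger.AvailableNow())
  .format("delta")
  .option("checkpointLocation", "/tmp/ckpt")  // hypothetical path
  .toTable("events_copy")
```

This sketch requires a live Spark runtime with the listed dependencies, so it is illustrative rather than verified against this repository's build.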

Benchmark framework update

Improvements to the benchmark framework (initial version added in version 1.2.0) including support for benchmarking arbitrary functions and not just SQL queries. We’ve also added Terraform scripts to automatically generate the infrastructure to run benchmarks on AWS and GCP.

Credits

Adam Binford, Allison Portis, Andreas Chatzistergiou, Andrew Vine, Andy Lam, Carlos Peña, Chang Yong Lik, Christos Stavrakakis, David Lewis, Denis Krivenko, Denny Lee, EJ Song, Edmondo Porcu, Felipe Pessoto, Fred Liu, Fu Chen, Grzegorz Kołakowski, Hedi Bejaoui, Hussein Nagree, Ionut Boicu, Ivan Sadikov, Jackie Zhang, Jiawei Bao, Jintao Shen, Jintian Liang, Jonas Irgens Kylling, Juliusz Sompolski, Junlin Zeng, KaiFei Yi, Kam Cheung Ting, Karen Feng, Koert Kuipers, Lars Kroll, Lin Zhou, Lukas Rupprecht, Max Gekk, Min Yang, Ming DAI, Nick, Ole Sasse, Prakhar Jain, Rahul Shivu Mahadev, Rajesh Parangi, Rui Wang, Ryan Johnson, Sabir Akhadov, Scott Sandre, Serge Rielau, Shixiong Zhu, Tathagata Das, Terry Kim, Thomas Newton, Tom van Bussel, Tyson Condie, Venki Korukanti, Vini Jaiswal, Will Jones, Xi Liang, Yijia Cui, Yousry Mohamed, Zach Schuermann, sherlockbeard, yikf

Delta Lake 2.0.0

We are excited to announce the release of Delta Lake 2.0.0 on Apache Spark 3.2.

... (truncated)

Commits
  • d8c4fc1 Setting version to 2.1.1
  • eae7e63 Upgrade version in integration tests
  • 41a1dbc Misc integration test updates
  • 4ec7631 Issue #1436: Fix restore delta table NotSerializableException for Hadoop 2
  • 58f539f Fix S3DynamoDBLogStore concurrent writer bug
  • d7845e6 Allow schema pruning for delete first pass
  • 26df795 Fix bug on merge command when DELTA_COLLECT_STATS is disabled
  • 34b52b9 Fix Delta streaming source filter logic to not return incorrect -1 index
  • 82ddcf1 Fix Delta source initialization issue when using AvailableNow
  • 8570049 Prevent Protocol Downgrades during RESTORE in Delta
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
codecov[bot] commented 2 years ago

Codecov Report

Merging #278 (bd215ba) into master (a170c57) will increase coverage by 0.04%. The diff coverage is n/a.

@@            Coverage Diff             @@
##           master     #278      +/-   ##
==========================================
+ Coverage   97.92%   97.97%   +0.04%     
==========================================
  Files          63       63              
  Lines        2027     2027              
  Branches      125      125              
==========================================
+ Hits         1985     1986       +1     
+ Misses         42       41       -1     
Flag              Coverage Δ
master_2.11_2.4   ?
master_2.12_3.2   ?
pr_2.11_2.3       91.95% <ø> (?)
pr_2.11_2.4       97.63% <ø> (?)
pr_2.12_2.4       97.61% <ø> (?)
pr_2.12_3.0       97.76% <ø> (?)

Flags with carried forward coverage won't be shown.

Impacted Files                                         Coverage Δ
...o/github/setl/storage/SparkRepositoryBuilder.scala  98.30% <0.00%> (+1.69%) ↑


dependabot[bot] commented 1 year ago

Superseded by #284.