projectnessie / nessie

Nessie: Transactional Catalog for Data Lakes with Git-like semantics
https://projectnessie.org
Apache License 2.0

Update dependency org.rocksdb:rocksdbjni to v9.6.1 #9545

Closed · renovate[bot] closed 2 months ago

renovate[bot] commented 2 months ago

This PR contains the following updates:

| Package | Change |
|---|---|
| org.rocksdb:rocksdbjni (source) | `9.5.2` -> `9.6.1` |
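
Concretely, this update is a one-line version bump in the build. A minimal sketch using the Gradle Kotlin DSL (illustrative only; the repository's actual build may declare the version through a version catalog or elsewhere):

```kotlin
dependencies {
    // RocksDB JNI bindings; this PR bumps 9.5.2 -> 9.6.1
    implementation("org.rocksdb:rocksdbjni:9.6.1")
}
```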

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

facebook/rocksdb (org.rocksdb:rocksdbjni)

### [`v9.6.1`](https://redirect.github.com/facebook/rocksdb/releases/tag/v9.6.1): RocksDB 9.6.1

#### 9.6.1 (2024-08-24)

##### Bug Fixes

- Fix correctness of MultiGet across column families with user timestamp.

#### 9.6.0 (2024-08-19)

##### New Features

- \*Best efforts recovery supports recovering to an incomplete Version with a clean seqno cut that presents a valid point-in-time view from the user's perspective, if the versioning history doesn't include atomic flush.
- New option `BlockBasedTableOptions::decouple_partitioned_filters` should improve efficiency in serving read queries because filter and index partitions can consistently target the configured `metadata_block_size`. This option is currently opt-in.
- Introduce a new mutable CF option `paranoid_memory_checks`. It enables additional validation of data integrity during reads/scanning. Currently, the skip-list-based memtable validates key ordering during lookups and scans.

##### Public API Changes

- Add ticker stats to count file read retries due to checksum mismatch.
- Add an optional installation callback function for remote compaction.

##### Behavior Changes

- There may be less intra-L0 compaction triggered by total L0 size being too small. We now use compensated file size (tombstones are assigned some value size) when calculating L0 size and reduce the threshold for the L0 size limit. This is to avoid accumulating too much data/tombstones in L0.

##### Bug Fixes

- \*Make DestroyDB support slow deletion when it's configured in `SstFileManager`. The slow deletion is subject to the configured `rate_bytes_per_sec`, but not subject to the `max_trash_db_ratio`.
- Fixed a bug where we set `unprep_seqs_` even when WriteImpl() fails. This was caught by stress-test write fault injection in WriteImpl(). This may have incorrectly caused iterator creation failure for unvalidated writes or returned a wrong result from WriteUnpreparedTxn::GetUnpreparedSequenceNumbers().
- Fixed a bug where a successful write, landing right after error recovery for the last failed write finishes, causes duplicate WAL entries.
- Fixed a data race involving the background error status in `unordered_write` mode.
- \*Fix a bug where file snapshot functions like backup and checkpoint may attempt to copy a non-existing manifest file. [#12882](https://redirect.github.com/facebook/rocksdb/issues/12882)
- Fix a bug where per-KV checksum corruption may be ignored in MultiGet().
- Fix a race condition in pessimistic transactions that could allow multiple transactions with the same name to be registered simultaneously, resulting in a crash or other unpredictable behavior.
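
None of the new 9.6.x options above need to be enabled to take this upgrade; a minimal sanity check that the bumped `rocksdbjni` artifact loads its bundled native library and serves reads and writes could look like the following sketch (the database path and key/value contents are illustrative):

```java
import java.nio.charset.StandardCharsets;

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class RocksDbSmokeTest {

    public static void main(String[] args) throws RocksDBException {
        // Loads the native library bundled in the rocksdbjni jar; this is
        // where an ABI or packaging problem in the new version would surface.
        RocksDB.loadLibrary();

        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/rocksdb-9.6.1-smoke")) {

            byte[] key = "nessie".getBytes(StandardCharsets.UTF_8);
            byte[] value = "catalog".getBytes(StandardCharsets.UTF_8);

            // A round-trip write and read confirms the basic I/O paths work.
            db.put(key, value);
            byte[] read = db.get(key);

            if (read == null || !"catalog".equals(new String(read, StandardCharsets.UTF_8))) {
                throw new IllegalStateException("unexpected read-back value");
            }
        }
    }
}
```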

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

♻️ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.
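
For reference, automerge behavior like the above is driven by the repository's Renovate configuration. A minimal, hypothetical `renovate.json` rule that would produce it might look like this (not the repository's actual config; the package rule and preset are assumptions for illustration):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchPackageNames": ["org.rocksdb:rocksdbjni"],
      "automerge": true
    }
  ]
}
```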



This PR was generated by Mend Renovate. View the repository job log.