maxisoft / Cryptodd

Save useful crypto data into databases
GNU Affero General Public License v3.0

Bump BitFaster.Caching from 2.2.0 to 2.3.2 #318

Closed. dependabot[bot] closed this pull request 11 months ago.

dependabot[bot] commented 11 months ago

Bumps BitFaster.Caching from 2.2.0 to 2.3.2.

Release notes

Sourced from BitFaster.Caching's releases.

v2.3.2

What's changed

  • Fix ConcurrentLru NullReferenceException when expiring and disposing null values (i.e. the cached value is a reference type and the caller cached a null value; see the sketch after this list).
  • Fix ConcurrentLfu handling of updates to detached nodes caused by concurrent reads and writes. Detached nodes could be re-attached to the probation LRU, pushing out fresh items prematurely; re-attached nodes would eventually expire since they can no longer be accessed.
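
A minimal sketch of the null-value scenario behind the first fix, assuming the ConcurrentTLru (capacity, time-to-live) constructor shape; LookupOrNull is a hypothetical stand-in for a lookup that finds nothing:

```csharp
using System;
using BitFaster.Caching.Lru;

// Time-bounded LRU; the (capacity, TimeSpan) constructor shape is assumed here.
var cache = new ConcurrentTLru<string, string?>(128, TimeSpan.FromSeconds(30));

// The factory returns null, so a null reference-type value is stored in the cache.
// Per the release note above, expiring and disposing such an entry could throw
// a NullReferenceException before 2.3.2.
string? value = cache.GetOrAdd("user:42", key => LookupOrNull(key));

static string? LookupOrNull(string key) => null; // hypothetical lookup that found nothing
```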

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.3.1...v2.3.2

v2.3.1

What's changed

  • Introduce a simple heuristic to estimate the optimal ConcurrentDictionary bucket count for ConcurrentLru/ConcurrentLfu/ClassicLru based on the capacity constructor arg. When the cache is at capacity, the ConcurrentDictionary will have a prime number bucket count and a load factor of 0.75.
    • When capacity is less than 150 elements, start with a ConcurrentDictionary capacity that is a prime number 33% larger than cache capacity. The initial size is large enough to avoid resizing (a rough arithmetic sketch follows this list).
    • For larger caches, pick ConcurrentDictionary initial size using a lookup table. Initial size is approximately 10% of the cache capacity such that 4 ConcurrentDictionary grow operations will arrive at a hash table size that is a prime number approximately 33% larger than cache capacity.
  • SingletonCache sets the internal ConcurrentDictionary capacity to the next prime number greater than the capacity constructor argument.
  • Fix ABA concurrency bug in Scoped by changing ReferenceCount to use reference equality (via object.ReferenceEquals).
  • .NET6 target now compiled with SkipLocalsInit. Minor performance gains.
  • Simplified AtomicFactory/AsyncAtomicFactory/ScopedAtomicFactory/ScopedAsyncAtomicFactory by removing redundant reads, reducing code size.
  • ConcurrentLfu.Count now does not lock the underlying ConcurrentDictionary, matching ConcurrentLru.Count.
  • Use CollectionsMarshal.AsSpan to enumerate candidates within ConcurrentLfu.Trim on .NET6.
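
As a rough illustration of the small-cache branch of this heuristic (not the library's actual prime table or lookup table), the arithmetic works out as follows; NextPrimeAtLeast is a hypothetical helper used only for the demonstration:

```csharp
using System;

// Hypothetical helper for the illustration: smallest prime >= n.
static int NextPrimeAtLeast(int n)
{
    static bool IsPrime(int x)
    {
        if (x < 2) return false;
        for (int i = 2; i * i <= x; i++)
            if (x % i == 0) return false;
        return true;
    }
    while (!IsPrime(n)) n++;
    return n;
}

// Small cache (< 150 elements): start the ConcurrentDictionary at a prime roughly
// 33% larger than the cache capacity, so it should not need to resize at capacity.
int cacheCapacity = 100;
int initialBuckets = NextPrimeAtLeast(cacheCapacity * 4 / 3); // 133 -> 137
Console.WriteLine(initialBuckets); // 137, i.e. a load factor of ~0.73 at capacity
```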

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.3.0...v2.3.1

v2.3.0

What's changed

  • Align TryRemove overloads with ConcurrentDictionary for ICache (including WithAtomicGetOrAdd). This adds two new overloads, illustrated in the sketch after this list:
    • bool TryRemove(K key, out V value) - enables getting the value that was removed.
    • bool TryRemove(KeyValuePair<K, V> item) - enables removing an item only when the key and value are the same.
  • Fix ConcurrentLfu.Clear() to remove all values when using BackgroundThreadScheduler. Previously, values could be left behind after Clear was called because removed items still present in window/protected/probation polluted the list of candidates to remove.
  • Fix ConcurrentLru.Clear() to reset the isWarm flag, so cache warmup behaves the same for a new instance of ConcurrentLru as for an existing instance that was full and then cleared. Previously, ConcurrentLru could have reduced capacity during warmup after calling Clear, depending on the access pattern.
  • Add extension methods to make it more convenient to use AtomicFactory with a plain ConcurrentDictionary. This is similar to storing a Lazy<T> instead of T, but with the same exception propagation semantics and API as ConcurrentDictionary.GetOrAdd.
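
A minimal sketch of the two new TryRemove overloads against a ConcurrentLru, assuming the usual single-argument capacity constructor; the key and value are arbitrary placeholders:

```csharp
using System;
using System.Collections.Generic;
using BitFaster.Caching.Lru;

var cache = new ConcurrentLru<string, int>(128);
cache.AddOrUpdate("answer", 42);

// New overload: remove and observe the value that was removed,
// mirroring ConcurrentDictionary.TryRemove(key, out value).
if (cache.TryRemove("answer", out int removed))
{
    Console.WriteLine($"removed {removed}"); // removed 42
}

// New overload: remove only when both key and value still match,
// mirroring ConcurrentDictionary.TryRemove(KeyValuePair<K, V>).
cache.AddOrUpdate("answer", 42);
bool pairRemoved = cache.TryRemove(new KeyValuePair<string, int>("answer", 42));
Console.WriteLine(pairRemoved); // True, unless another thread changed the entry first
```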

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.2.1...v2.3.0

v2.2.1

What's changed

  • Fix a ConcurrentLru bug where a repeated pattern of sequential key access could lead to unbounded growth.
  • Use Span APIs within MpscBoundedBuffer/StripedMpscBuffer/ConcurrentLfu on .NET6/.NETCore3.1 build targets. Reduces ConcurrentLfu lookup latency by about 5-7% in the lookup benchmark.

Full changelog: https://github.com/bitfaster/BitFaster.Caching/compare/v2.2.0...v2.2.1

Commits


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
dependabot[bot] commented 11 months ago

Superseded by #322.