foyer-rs / foyer

Hybrid in-memory and disk cache in Rust
https://foyer.rs
Apache License 2.0

RFC: use atomic reference count to track external in-memory cache record usage #779

Open MrCroxx opened 4 days ago

MrCroxx commented 4 days ago

Atomic Reference Count Management

[RawCache] uses an atomic reference count to manage the release of an entry.

The atomic reference count represents the external references of a cache record.

When the reference count drops to zero, the related cache shard is locked to release the cache record.

It is important to guarantee the correct usage of the atomic reference count, especially when triggering the release of a record. Without any other synchronization mechanism, there could be dangling pointers or double frees:

 Thread 1: [ decrease ARC to 0 ] ============> [ release record ]
 Thread 2:                         [ increase ARC to 1 ] =======> dangling!! ==> double free!!

Thankfully, we can prevent it from happening with the usage of the shard lock:

The only ops that will increase the atomic reference count are:

  1. Insert/get/fetch on the [RawCache] to get external entries. (locked)
  2. Clone an external entry. (lock-free)

    Op 1 is always guarded by a mutex/rwlock, so it cannot happen while a record is being released with the shard locked.

    Op 2 is lock-free, but it only happens when there is at least 1 external reference. So it cannot happen while a record is being released, because when a release is in progress, there must be no external reference.

    So, this case will never happen:

    Thread 1: [ decrease ARC to 0 ] ================> [ release record ]
    Thread 2:                           [ (op2) increase ARC to 1 ] ==> dangling!! ==> double free!!

    When starting to release a record, after locking the shard, it is still required to check the atomic reference count, because this case can still occur:

    Thread 1: [ decrease ARC to 0 ] ====================================> [ release record ]
    Thread 2:                           [ (op1) increase ARC to 1 ] =======> dangling!! ==> double free!!

    Although op 1 requires the lock, the release operation can be delayed until after it. So the release operation can be skipped after checking that the atomic reference count is not zero.

    There is no need to worry about leaking: whenever the atomic reference count drops to zero again, a release operation always follows the preceding increase.
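The release protocol described above (decrement, lock the shard on zero, then re-check under the lock) can be sketched as follows. The `Record` and `Shard` types and the `release` helper here are hypothetical, simplified stand-ins for foyer's internals, not its actual API:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

// Hypothetical, simplified stand-ins for foyer's shard and record types.
struct Record {
    refs: AtomicUsize, // external reference count
}

struct Shard {
    lock: Mutex<()>, // stands in for the per-shard mutex/rwlock
}

/// Called when an external entry handle is dropped.
/// Returns true if this call actually released the record.
fn release(shard: &Shard, record: &Record) -> bool {
    // Decrement the external reference count.
    if record.refs.fetch_sub(1, Ordering::SeqCst) != 1 {
        return false; // other external references remain
    }
    // We dropped the count to 0; lock the shard before freeing.
    let _guard = shard.lock.lock().unwrap();
    // Re-check under the lock: a locked op (insert/get/fetch, op 1) may
    // have resurrected the record before we acquired the guard.
    if record.refs.load(Ordering::SeqCst) != 0 {
        // Skip the release; whoever later drops the count to 0 again
        // will perform it instead, so nothing leaks.
        return false;
    }
    // Safe to free: op 1 is excluded by the shard lock, and op 2 (entry
    // clone) cannot run because there is no external reference left.
    true // the real code would evict/deallocate the record here
}
```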

Related issues & PRs

#778

hzxa21 commented 3 days ago

The RFC LGTM.

Let me put the background of this RFC here (@MrCroxx correct me if I am wrong):

wenym1 commented 3 days ago

A drawback of using atomic add and atomic sub is that we are not aware of the value of the variable when the operation is actually applied on it. To resolve this drawback, a feasible solution can be using CAS. With CAS, we can be aware of the value before the operation is applied on the atomic variable.

A general implementation can be:

  • Acquirers ensure that they will not increment the value when the value is 0
  • Releasers decrement the value anyway. The releaser that decrements the value from 1 to 0 is responsible for releasing the object.

Li0k commented 3 days ago

After discussion, we found that there are several situations that may lead to entry clone:

  1. single flight: after the acquisition, each waiter gets a clone of the entry.

  2. cache get: a clone of the entry is returned from the foyer cache.

  3. clone when the entry is used by an upper-level caller.

Therefore, clone is not always initiated by the caller (3), but can also come from 1 and 2, so I'm in favour of adopting the current RFC to address all the scenarios involved.

Since the RFC greatly reduces the frequency of locking, using either CAS or a lock is acceptable to me.

Thanks for the efforts

MrCroxx commented 3 days ago

> A drawback of using atomic add and atomic sub is that we are not aware of the value of the variable when the operation is actually applied on it. To resolve this drawback, a feasible solution can be using CAS. With CAS, we can be aware of the value before the operation is applied on the atomic variable.
>
> A general implementation can be:
>
> • Acquirers ensure that they will not increment the value when the value is 0
> • Releasers decrement the value anyway. The releaser that decrements the value from 1 to 0 is responsible for releasing the object.

I think this optimization does not solve the problem. Please correct me if I am wrong.

When the releaser finds itself responsible for releasing the object, it still needs to acquire the mutex lock of the shard. In the gap before the lock guard is actually acquired, there is a chance that another thread calls get/fetch on the same key and increases the refs again.

MrCroxx commented 1 day ago

ABA and Double Free found.

e.g.

 Thread 1: [ decrease ARC to 0 ] ==================================================================================> [ release record ] ( dangling!!  double free!! )
 Thread 2:                         [ (op2) increase ARC to 1 ] ===> [ decrease ARC to 0 ] ===> [ release record ]

Let me try to solve the case with versioned CAS.
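The issue does not spell out the versioned-CAS design, but one common shape is to pack a version next to the count in a single atomic word, so that a stale releaser's claim fails if anyone re-acquired in between. Everything below (`pack`, `TOMBSTONE`, the `acquire`/`release` helpers) is a hypothetical sketch, not foyer's implementation, and it ignores reclamation of the counter word itself (which in foyer would happen under the shard lock):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Pack a 32-bit version (high half) and a 32-bit ref count (low half).
fn pack(version: u32, count: u32) -> u64 {
    ((version as u64) << 32) | count as u64
}
fn version(word: u64) -> u32 {
    (word >> 32) as u32
}
fn count(word: u64) -> u32 {
    word as u32
}

const TOMBSTONE: u64 = u64::MAX; // marks a record whose free has been claimed

/// Increment the count and bump the version, so a pending release that
/// observed the old versioned zero will fail its claim CAS and back off.
fn acquire(state: &AtomicU64) -> bool {
    let mut cur = state.load(Ordering::Acquire);
    loop {
        if cur == TOMBSTONE {
            return false; // free already claimed; re-fetch via the shard
        }
        let next = pack(version(cur).wrapping_add(1), count(cur) + 1);
        match state.compare_exchange_weak(cur, next, Ordering::AcqRel, Ordering::Acquire) {
            Ok(_) => return true,
            Err(observed) => cur = observed,
        }
    }
}

/// Decrement the count. The thread that reaches a versioned zero tries to
/// claim the free by CASing that exact (version, 0) word to TOMBSTONE; any
/// intervening acquire changed the version, so a stale claim fails.
/// Returns true if this caller won the claim and must perform the free.
fn release(state: &AtomicU64) -> bool {
    let mut cur = state.load(Ordering::Acquire);
    loop {
        debug_assert!(cur != TOMBSTONE && count(cur) > 0);
        let next = pack(version(cur).wrapping_add(1), count(cur) - 1);
        match state.compare_exchange_weak(cur, next, Ordering::AcqRel, Ordering::Acquire) {
            Ok(_) => {
                if count(next) != 0 {
                    return false; // other references remain
                }
                // Only succeeds if nobody re-acquired since our decrement.
                return state
                    .compare_exchange(next, TOMBSTONE, Ordering::AcqRel, Ordering::Acquire)
                    .is_ok();
            }
            Err(observed) => cur = observed,
        }
    }
}
```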

MrCroxx commented 1 day ago

@arkbriar proposed an exquisite solution. Let me organize and record it.

wenym1 commented 1 day ago

> ABA and Double Free found.
>
> e.g.
>
> Thread 1: [ decrease ARC to 0 ] ==================================================================================> [ release record ] ( dangling!!  double free!! )
> Thread 2:                         [ (op2) increase ARC to 1 ] ===> [ decrease ARC to 0 ] ===> [ release record ]

If we use CAS to increment the ref count, following the previously mentioned protocol, this won't be a problem.

> * Acquirers ensure that they will not increment the value when the value is 0
> * Releasers decrement the value anyway. The releaser that decrements the value from 1 to 0 is responsible for releasing the object.

In the example above, when Thread 2 tries to increase the ref count from 0 to 1, it should not succeed, and should fall back to other work.
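The acquirer side of this protocol can be sketched with a CAS loop that refuses to resurrect a zero count. The `try_acquire` helper below is a hypothetical illustration, not foyer's API; a caller that gets `false` would fall back to the locked get/fetch path:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Lock-free acquire following the protocol above: never increment a
/// count that has already reached 0, since a release may be in progress.
fn try_acquire(refs: &AtomicUsize) -> bool {
    let mut cur = refs.load(Ordering::Relaxed);
    loop {
        if cur == 0 {
            return false; // refuse to resurrect; take the locked path instead
        }
        // Unlike fetch_add, CAS tells us the exact value we increment from.
        match refs.compare_exchange_weak(cur, cur + 1, Ordering::AcqRel, Ordering::Relaxed) {
            Ok(_) => return true,
            Err(observed) => cur = observed, // raced with another thread; retry
        }
    }
}
```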

wenym1 commented 1 day ago

After rethinking this proposal, I doubt whether it is necessary.

As summarized in the issue description above, we have two ways to increment the ref count: insert/get/fetch and entry clone. IIUC, this proposal can only improve the performance of entry clone by reducing the time spent acquiring the lock; for insert/get/fetch, we have to acquire the lock anyway. However, if this proposal is only for improving entry clone, an easier implementation is simply wrapping the returned entry in an Arc, and then the problem of acquiring the lock during entry clone is easily resolved.
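The Arc-wrapping idea can be sketched as follows. The `CacheEntry` type and `share` helper are hypothetical stand-ins; the point is only that cloning an `Arc` handle is a plain refcount bump that never touches the shard, while the shard-visible external count stays at 1 until the last `Arc` drops:

```rust
use std::sync::Arc;

// Hypothetical stand-in for the entry type handed out by the cache; in
// the real design, dropping the inner entry would decrement the
// shard-tracked external reference count once.
struct CacheEntry {
    value: Vec<u8>,
}

// If the cache returns Arc<CacheEntry> instead of CacheEntry, cloning a
// handle is lock-free and involves no shard interaction at all.
fn share(entry: CacheEntry) -> (Arc<CacheEntry>, Arc<CacheEntry>) {
    let a = Arc::new(entry);
    let b = Arc::clone(&a); // lock-free clone, shard count unchanged
    (a, b)
}
```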

Li0k commented 1 day ago

> After rethinking this proposal, I doubt whether it is necessary.
>
> As summarized in the issue description above, we have two ways to increment the ref count: insert/get/fetch and entry clone. IIUC, this proposal can only improve the performance of entry clone by reducing the time spent acquiring the lock; for insert/get/fetch, we have to acquire the lock anyway. However, if this proposal is only for improving entry clone, an easier implementation is simply wrapping the returned entry in an Arc, and then the problem of acquiring the lock during entry clone is easily resolved.

+1. Do we have a scenario that consistently reproduces this issue? And is the performance degradation caused by a single-key hotspot? I'd like to analyse it in a specific scenario.

MrCroxx commented 1 day ago

> After rethinking this proposal, I doubt whether it is necessary.
>
> As summarized in the issue description above, we have two ways to increment the ref count: insert/get/fetch and entry clone. IIUC, this proposal can only improve the performance of entry clone by reducing the time spent acquiring the lock; for insert/get/fetch, we have to acquire the lock anyway. However, if this proposal is only for improving entry clone, an easier implementation is simply wrapping the returned entry in an Arc, and then the problem of acquiring the lock during entry clone is easily resolved.

This RFC is only one part of the optimizations in #778. #778 also contains an optimization that allows the get operation to take only the read lock when the algorithm doesn't need to modify the entry. This RFC is a prerequisite for that.

wenym1 commented 1 day ago

> After rethinking this proposal, I doubt whether it is necessary. As summarized in the issue description above, we have two ways to increment the ref count: insert/get/fetch and entry clone. IIUC, this proposal can only improve the performance of entry clone by reducing the time spent acquiring the lock; for insert/get/fetch, we have to acquire the lock anyway. However, if this proposal is only for improving entry clone, an easier implementation is simply wrapping the returned entry in an Arc, and then the problem of acquiring the lock during entry clone is easily resolved.
>
> This RFC is only one part of the optimizations in #778. #778 also contains an optimization that allows the get operation to take only the read lock when the algorithm doesn't need to modify the entry. This RFC is a prerequisite for that.

I see. Do we have any issue describing the general optimizations in #778? The PR is quite large, and it is quite reviewer-unfriendly without sufficient context about the optimization.

MrCroxx commented 1 day ago

> After rethinking this proposal, I doubt whether it is necessary. As summarized in the issue description above, we have two ways to increment the ref count: insert/get/fetch and entry clone. IIUC, this proposal can only improve the performance of entry clone by reducing the time spent acquiring the lock; for insert/get/fetch, we have to acquire the lock anyway. However, if this proposal is only for improving entry clone, an easier implementation is simply wrapping the returned entry in an Arc, and then the problem of acquiring the lock during entry clone is easily resolved.
>
> This RFC is only one part of the optimizations in #778. #778 also contains an optimization that allows the get operation to take only the read lock when the algorithm doesn't need to modify the entry. This RFC is a prerequisite for that.
>
> I see. Do we have any issue describing the general optimizations in #778? The PR is quite large, and it is quite reviewer-unfriendly without sufficient context about the optimization.

Sorry about that. I built the new framework from scratch, so it is really not easy to see the diffs. I'm working on docs and will replace the original implementation with the new framework after trying the idea Ark suggested.

arkbriar commented 1 day ago

> I'm working on docs and will replace the original implementation with the new framework after trying the idea Ark suggested.

xD The idea is basically the same as what @wenym1 proposed in https://github.com/foyer-rs/foyer/issues/779#issuecomment-2436808683, except that its release point is in the negative (the thread that performs the real free must have successfully executed CAS(ref_count, 0, -1) after it decreases the counter to 0), so that other threads still have a chance to pick up a pointer that has been released by all other threads (i.e., ref count = 0) and bring its ref count back to 1. In normal cases the two ideas are identical, but when the reference count of a pointer is intensively contended, mine helps reduce acquisition failures.
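The negative-sentinel variant described above might be sketched like this. The `acquire`/`release` helpers and the `FREEING` sentinel are hypothetical illustrations of the comment's idea, not foyer code; the winner of the `CAS(0, -1)` claim would still perform the actual free under the shard lock:

```rust
use std::sync::atomic::{AtomicIsize, Ordering};

const FREEING: isize = -1; // sentinel: some thread has claimed the free

/// Acquirers may resurrect a count of 0, but never a claimed (-1) record.
fn acquire(refs: &AtomicIsize) -> bool {
    let mut cur = refs.load(Ordering::Relaxed);
    loop {
        if cur == FREEING {
            return false; // the free is claimed; re-fetch via the shard
        }
        match refs.compare_exchange_weak(cur, cur + 1, Ordering::AcqRel, Ordering::Relaxed) {
            Ok(_) => return true, // may go 0 -> 1: resurrection is allowed
            Err(observed) => cur = observed,
        }
    }
}

/// Decrement; the thread that reaches 0 tries to claim the free with
/// CAS(0, -1). If another thread resurrected the record first, the CAS
/// fails and this thread abandons the free. Returns true if this caller
/// won the claim and is responsible for the real free.
fn release(refs: &AtomicIsize) -> bool {
    if refs.fetch_sub(1, Ordering::AcqRel) != 1 {
        return false; // other references remain, or someone else reaches 0
    }
    refs.compare_exchange(0, FREEING, Ordering::AcqRel, Ordering::Relaxed)
        .is_ok()
}
```

Compared with refusing every increment from 0, this only rejects acquirers after a free is actually claimed, which is where the reduced failure rate under contention comes from.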