nvanbenschoten opened this issue 4 years ago
Hi. We need this for Keycloak to work, and possibly for Liquibase.
@nvanbenschoten do you have thoughts on the likelihood of this landing in 21.1? And then, as an add-on, what are the chances of getting ranged locks?
I'd defer that to @sumeerbhola. This change doesn't seem particularly involved, though we wouldn't want to make the change until some of the larger lock table changes land in this release. Ranged locks are a much larger ask, but are being considered in the design of https://github.com/cockroachdb/cockroach/pull/55664.
@ajwerner Replicated range locks have performance implications, since one can't use storage engine bloom filters when looking for conflicting range locks in the lock table. I was speculating on https://github.com/cockroachdb/cockroach/pull/55664 that we could make the leaseholder maintain an in-memory summary to prevent a performance drop, though a new leaseholder would need to fall back to those storage reads until it initializes its in-memory state by reading the range locks from storage (which is not ideal). Can you make do with unreplicated range locks?
Unreplicated range locks would be great. I’ll take what I can get. Replicated locks would enable us to avoid refreshing reads but unreplicated locks are a lot better than nothing.
> Unreplicated range locks would be great.

What strength? Can you open a separate issue?
Cross-posting a snippet of a thread with @sumeerbhola to ensure we don't miss this when we return to this issue:
Regarding an upgrade locking strength: I do still think there's room for increased concurrency between locking SFU reads and non-locking reads. However, we'll want to be guided by benchmarks here, mainly YCSB.

I suspect that in doing so, we will find that we need to couple the upgrade locking strength with a second optimization: a way for implicit, single-row UPDATE statements to perform a server-side refresh during their {Put, EndTxn} batch. The idea here would be that if a transaction is going to write to all of the keys that it read from and ends up writing entirely to a single range, it should be able to implicitly refresh away bumps to its write timestamp due to read-write conflicts (but not write-write conflicts!). We already have some notion of server-side refreshes (see CanForwardReadTimestamp), but we'll need to extend that concept. Without this, I fear that the increased concurrency due to the weaker SFU locking strength would cause transaction retries that would negate any benefit provided by the increased concurrency.
The server-side refresh mechanism I'm referring to here is implemented in `canDoServersideRetry`.
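The refresh condition described above can be sketched as a toy predicate. This is a simplified model with hypothetical names; CockroachDB's actual logic lives in `canDoServersideRetry` and is considerably more involved:

```python
def can_server_side_refresh(read_keys, write_keys, ranges_touched,
                            bumped_by_write_write_conflict):
    """Toy model of the proposed server-side refresh for implicit,
    single-row UPDATEs whose {Put, EndTxn} batch fits in one range.

    The batch may implicitly refresh away a pushed write timestamp only if:
      * every key the txn read is also being written (no reads left to
        re-verify elsewhere), and
      * the whole batch lands on a single range (so the refresh can be
        evaluated server-side), and
      * the timestamp bump came from a read-write conflict, not a
        write-write conflict.
    """
    return (set(read_keys) <= set(write_keys)
            and ranges_touched == 1
            and not bumped_by_write_write_conflict)

# An implicit single-row UPDATE: read row "a", write row "a", one range,
# pushed by a conflicting non-locking read -> refresh is possible.
print(can_server_side_refresh({"a"}, {"a"}, 1, False))  # True
# Same batch pushed by a write-write conflict -> no server-side refresh.
print(can_server_side_refresh({"a"}, {"a"}, 1, True))   # False
```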
Regarding how this locking strength would map to `SELECT FOR UPDATE` and `SELECT FOR SHARE` statements:
It might be interesting to expose this level of locking strength through SQL to applications (though I'm not sure what we'd call it, since PG already uses "FOR UPDATE" to mean exclusive locking). It could be useful when applications are not sure at the time of the locking `SELECT` whether any of the rows read will be modified later in the transaction. (`FOR UPDATE` would then mean the application is sure at least some of the rows read will be modified, and `FOR SHARE` would continue to mean the application is sure none of the rows read will be modified.)

(Or, alternatively, we could always use this level of locking strength for `FOR UPDATE`. But then what would exclusive locks be for?)
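The two alternatives above can be summarized as mappings from SQL locking clauses to lock strengths. This is a sketch only: the clause name for the new strength is deliberately left open (the comment notes it is unnamed), and the strength names mirror the discussion rather than any actual enum:

```python
# Hypothetical mapping of SQL row-locking clauses to lock strengths,
# reflecting the two options discussed above.

# Option 1: expose the new strength via a new, yet-unnamed clause.
option_new_clause = {
    "FOR SHARE": "Shared",      # sure no rows read will be modified
    "FOR <unnamed>": "Upgrade", # unsure whether rows will be modified
    "FOR UPDATE": "Exclusive",  # sure some rows read will be modified
}

# Option 2: always use the weaker strength for FOR UPDATE.
option_weaken_for_update = {
    "FOR SHARE": "Shared",
    "FOR UPDATE": "Upgrade",    # ...but then what are Exclusive locks for?
}

print(option_new_clause["FOR UPDATE"])         # Exclusive
print(option_weaken_for_update["FOR UPDATE"])  # Upgrade
```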
Removing the O-support label here. This was from a time when we thought we'd need to switch `SELECT FOR UPDATE` to acquire `Update`/`Upgrade` locks instead of `Exclusive` locks so that they don't conflict with non-locking readers. Our thinking has evolved since, and we expect non-locking readers to not conflict with `Exclusive` locks. See https://github.com/cockroachdb/cockroach/issues/117627 -- I'll apply O-support to that one instead.
The `lockTable` currently only supports the exclusive lock strength, but was designed to support varying degrees of lock strength. Specifically, we'd eventually like to add support for `Shared` and `Upgrade` locks as well.

https://github.com/cockroachdb/cockroach/blob/9e1666be01bd45358741c776bcc7137b56c7ce87/pkg/kv/kvserver/concurrency/lock/locking.proto#L40-L50
We currently have no need for `Shared` locks, but `Upgrade` locks would be immediately useful for unreplicated locks acquired during the initial row-fetch of an `UPDATE` statement (implicit SFU) or by a `SELECT ... FOR UPDATE` statement (explicit SFU).

https://github.com/cockroachdb/cockroach/blob/9e1666be01bd45358741c776bcc7137b56c7ce87/pkg/kv/kvserver/concurrency/lock/locking.proto#L90-L130
The major benefit of this form of lock is that it does not conflict with non-locking reads. This avoids blocking those reads, at the expense of undermining some of the protection against transaction retries that is provided by SFU. This is because these non-locking reads are allowed to slip underneath `Upgrade` locks and bump the timestamp cache, which may force the `UPDATE` to push its timestamp when it returns to write an intent. This is probably ok, and will certainly be once we also support `Shared` locks and `SELECT ... FOR SHARE` so that users can have full control over row-level locking.

This should improve throughput on YCSB-B (95% reads, 5% updates). It's unclear how it will affect YCSB-A (50% reads, 50% updates).
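As a rough illustration of the conflict behavior described here, below is a minimal compatibility-check sketch. The strength names follow the discussion above, but the exact matrix is a simplifying assumption, not a copy of the `lock.Strength` semantics in locking.proto; in particular, `Exclusive` is modeled as conflicting with non-locking reads, which reflects the behavior at the time this issue was filed:

```python
# Simplified lock-strength compatibility, per the description above:
# non-locking reads (None) do not conflict with Upgrade locks, while
# locking accesses of Upgrade/Exclusive strength conflict with each other.
NONE, SHARED, UPGRADE, EXCLUSIVE = "None", "Shared", "Upgrade", "Exclusive"

# CONFLICTS[held][requested] -> does the requested access block?
CONFLICTS = {
    NONE:      {NONE: False, SHARED: False, UPGRADE: False, EXCLUSIVE: False},
    SHARED:    {NONE: False, SHARED: False, UPGRADE: True,  EXCLUSIVE: True},
    UPGRADE:   {NONE: False, SHARED: True,  UPGRADE: True,  EXCLUSIVE: True},
    EXCLUSIVE: {NONE: True,  SHARED: True,  UPGRADE: True,  EXCLUSIVE: True},
}

def conflicts(held, requested):
    """Return True if a lock of strength `held` blocks a request of
    strength `requested` on the same key."""
    return CONFLICTS[held][requested]

# The key property motivating Upgrade locks: a non-locking read is not
# blocked by an Upgrade lock, unlike an Exclusive lock (in this model).
print(conflicts(UPGRADE, NONE))     # False: read proceeds, bumps ts cache
print(conflicts(EXCLUSIVE, NONE))   # True
print(conflicts(UPGRADE, UPGRADE))  # True: SFU vs SFU still serializes
```

The non-blocked read in the first case is exactly what can later force the `UPDATE` to push its timestamp, which is why the server-side refresh discussed earlier in the thread matters.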
Jira issue: CRDB-4209