
Post: The Case of the Blocking Merge Statement (LCK_M_RS_U locks) #17

Open LitKnd opened 3 years ago

LitKnd commented 3 years ago

Migrating comments from WordPress.

Blocking Merge Statement – Curated SQL, May 26, 2016, 5:15 am: […] Kendra Little walks through the MERGE command and potential blocking issues with it: […]

Test an Insert for Race Conditions with OStress.exe - by Kendra Little, May 31, 2016, 8:00 am: […] Higher locks that are held longer can mean more blocking, and of course that can slow you down. If new rows come in relatively rarely, I probably want to make my code a bit more complex and only take out the higher locks when I really have to. I also want to make sure my indexes are optimized to help me read and lock the smallest range possible for speed. Check out a sample of that scenario in my prior post. […]
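
As an illustration of the kind of test that post describes, a single OStress run that hammers an upsert from many concurrent sessions will usually surface this class of race condition and blocking quickly. This is only a sketch: the server, database, and procedure names are placeholders, and the thread/iteration counts are arbitrary.

```
REM Run a hypothetical upsert procedure from 16 concurrent sessions,
REM 200 iterations each, to flush out race conditions and blocking.
ostress.exe -E -SYourServer -dYourDatabase -n16 -r200 -q -Q"EXEC dbo.YourUpsertProc @Key = 'ABC';"
```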

Stephen Rojak, April 5, 2017, 11:32 am: I don’t hate MERGE, but your points about locking are well taken. I would save MERGE for installation/upgrade operations and other one-shot situations where locking is not an issue. My preference in a production OLTP environment would be to have upsert stored procedures for individual rows, so that the sprocs can be responsible for the locking and abstract that away from the application code. However, multi-row upsert operations, particularly where an OUTPUT clause is needed, muddy the waters. In the end, the application developer cannot go on blissfully unaware of transaction scope, concurrency and locking.
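
To make the single-row upsert idea concrete, here is a minimal sketch of the kind of procedure Stephen describes. The table, columns, and procedure name are made up, and the UPDLOCK/SERIALIZABLE hints are one common way (not the only way) to keep the update-then-insert safe under concurrency:

```sql
-- Hypothetical single-row upsert: the procedure owns the transaction and the
-- locking hints, so application code just calls it and stays unaware of them.
CREATE OR ALTER PROCEDURE dbo.UpsertCustomer
    @CustomerId   int,
    @DisplayName  nvarchar(100)
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    BEGIN TRAN;

    -- UPDLOCK + SERIALIZABLE holds a key (or key-range) lock on the seek,
    -- so a concurrent caller with the same @CustomerId waits here rather
    -- than racing us to the INSERT below.
    UPDATE dbo.Customer WITH (UPDLOCK, SERIALIZABLE)
    SET DisplayName = @DisplayName
    WHERE CustomerId = @CustomerId;

    IF @@ROWCOUNT = 0
    BEGIN
        INSERT dbo.Customer (CustomerId, DisplayName)
        VALUES (@CustomerId, @DisplayName);
    END

    COMMIT;
END;
GO
```

The application then just runs something like `EXEC dbo.UpsertCustomer @CustomerId = 42, @DisplayName = N'Example';` and never has to know about the hints or the transaction.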

Kid, February 26, 2018, 7:36 am: You are my teacher 🙂

Ben, December 15, 2020, 11:14 pm: Don’t you have a race condition here where two processes call with the same parent ID, both find no rows, and both enter the “is null” block? Whichever one runs the insert first gets nothing in the output, because nothing is inserted, and then goes on to try to insert a null parent ID.
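
To sketch the shape of the race Ben is describing (this is not the code from the post; the schema, table, and column names here are invented), two sessions can interleave like this:

```sql
-- Hypothetical schema for the sketch:
-- CREATE TABLE dbo.Parent (ParentId int IDENTITY PRIMARY KEY, ParentKey varchar(20) NOT NULL UNIQUE);

DECLARE @ParentKey varchar(20) = 'ABC';
DECLARE @ParentId  int;

SELECT @ParentId = ParentId
FROM dbo.Parent
WHERE ParentKey = @ParentKey;

IF @ParentId IS NULL
BEGIN
    -- Two sessions can both arrive here having seen "no row".
    DECLARE @new TABLE (ParentId int);

    INSERT dbo.Parent (ParentKey)
    OUTPUT inserted.ParentId INTO @new (ParentId)
    SELECT @ParentKey
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Parent WHERE ParentKey = @ParentKey);

    -- The session that loses the race inserts zero rows, so the OUTPUT is
    -- empty and @ParentId stays NULL when it goes on to insert the child.
    SELECT @ParentId = ParentId FROM @new;

    -- One way to close the gap: fall back to re-reading the row the other
    -- session created (or take UPDLOCK/HOLDLOCK on the first SELECT inside
    -- a transaction so the second session waits instead of racing).
    IF @ParentId IS NULL
        SELECT @ParentId = ParentId
        FROM dbo.Parent
        WHERE ParentKey = @ParentKey;
END;

-- ...@ParentId is then used to insert the child row...
```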

nairobi21 commented 1 year ago

Hi Kendra, we are dealing with something similar in our databases. We have an issue (lots of locks/waits) in the sink DB running CDC replication. We have implemented Debezium with Kafka, and we are experiencing a performance problem: when both the source and sink connectors are working at the same time, consumer performance decreases significantly. Here are some numbers for what we have:

- 1 Kafka cluster with 3 nodes (8 GB RAM, -Xms: 4096m, -Xmx: 4096m each); all topics with replication factor = 3
- 1 ZooKeeper cluster with 3 nodes (4 GB RAM, -Xms: 2048m, -Xmx: 2048m each)
- 1 Connect cluster with 5 nodes (12 GB RAM each, -Xms: 6168m, -Xmx: 6168m), with 1 partition and 1 task, though we have tried with 1 / 5 / 100 partitions and 1 / 5 / 15 / 100 tasks
- 1 source connector per table (4 tables)
- 1 sink connector per table (4 tables)

We are using Schema Registry and the Avro converter. The image below shows what is happening in the destination DB:

[image: activity and lock waits in the destination DB]

Can you help me? How can we address this? I cannot change the MERGE statement, and we have a clustered PK on the match column.
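
For diagnosing a situation like this, here is a minimal sketch of a query that shows which requests in the destination database are currently waiting on locks (including LCK_M_RS_U range-lock waits), what they are running, and which session is blocking them. Nothing in it is specific to Debezium or to the MERGE in question:

```sql
-- Current requests waiting on a lock, the wait details, and the blocker.
SELECT
    r.session_id,
    r.blocking_session_id,
    r.wait_type,        -- e.g. LCK_M_RS_U = waiting for a RangeS-U range lock
    r.wait_time,        -- milliseconds waited so far
    r.wait_resource,
    t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type LIKE N'LCK%'
ORDER BY r.wait_time DESC;
```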