
Post: Why ROWLOCK Hints Can Make Queries Slower and Blocking Worse in SQL Server #6

Open · LitKnd opened 3 years ago

LitKnd commented 3 years ago

Comment history imported from WordPress

Beware ROWLOCK Hints – Curated SQL February 8, 2016 5:06 am […] Kendra Little points out that ROWLOCK hints might make blocking worse: […]


Maciej Łuszczyński September 13, 2016 1:00 am Hi Kendra,

Awesome article, many thanks!

I’m wondering if you could advise on a more complex scenario, e.g. if there are also some additional indexes on the table built upon the column being updated. Does it make any sense to add a WITH (ROWLOCK) hint to the update statement? Or is the update statement going to escalate the locks to a table lock for sure, so that the hint would only cause more deadlocks (which I can see on a live database at the moment)?

Best regards Maciej

Kendra Little September 19, 2016 8:36 am I hate saying never use something, because very often in practice there does tend to be a use case.

Ideally, if an update is well indexed and can find the rows efficiently, it should make the decision between row and page locks efficiently. Most of the time, allowing it the option to choose a page lock just helps reduce the chances that it will attempt escalation.

One note: escalation isn’t a sure thing, and the update can’t always get escalation because of other users of the table. So it might attempt escalation and never actually get it. Or it might get it just sometimes. That “sometimes” is the most confusing part of troubleshooting, I think!
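(For readers following along, here is roughly what the hinted update under discussion looks like; the table, column, and predicate below are hypothetical:)

```sql
-- Hypothetical sketch: force row-level locks for a single update.
-- Without the hint, SQL Server chooses row or page locks itself,
-- and may attempt to escalate to a table lock as lock counts grow.
UPDATE dbo.Customers WITH (ROWLOCK)
SET LastName = N'Little'
WHERE CustomerId = 42;
```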

JRStern January 3, 2017 4:58 pm Two things. First, how many row locks versus how many page locks are we talking about, and is Extended Events reporting on shared row locks taken as a table is scanned if page locks are not available? Second, are we looking at microseconds here, so that even the slow execution is on the order of 0.25 seconds (using a parallel plan)? That might still be 10x slower than with page locks, but still fast enough to live with if there are other reasons to want to avoid page locks. Or maybe not fast enough to live with if it puts a burden on scanning, where we typically don’t think about locking.

I ask because I have a situation: our main table has a lot of contention, and I was seriously looking at disabling page locks, or, per this article, at least using ROWLOCK on specific queries while leaving page locks enabled for the table. If simply scanning a table (which we shouldn’t have to do) or a fat cluster in an index (which we do have) is that much slower when page locks are disabled … that would probably be a stopper.
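(For reference, “disabling page locks” is an index option; a minimal sketch, with a placeholder table name:)

```sql
-- Hypothetical sketch: disallow page locks on all indexes of a table.
-- SQL Server will then take row locks (or a table lock) instead.
-- Note: indexes with ALLOW_PAGE_LOCKS = OFF cannot be reorganized.
ALTER INDEX ALL ON dbo.MainTable
SET (ALLOW_PAGE_LOCKS = OFF);
```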

I guess these locks are generated by a scan? Because I wouldn’t expect your 2m-row database to have all that many Shakiras to generate that many locks otherwise. Or am I wrong about the count of Shakiras? I guess I could download your database to answer some of these by inspection.

Thanks,

Kendra Little January 5, 2017 1:27 pm “First how many row locks versus how many page locks are we talking about…” This is a small, narrow table, only 45MB. It was 1,825,433 row locks vs. ~5,500 page locks. I was actually surprised that there was a consistent, measurable difference on such a small table! The numbers reported here are in microseconds. As your data sizes grow, the difference can be more significant, but even “more significant” might be small enough not to matter for many applications.
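(One way to get rough counts like these is to query sys.dm_tran_locks from a second connection while the statement runs; a sketch, where session id 55 is a placeholder and very short-lived locks can be missed:)

```sql
-- Rough sketch: count locks held/requested by one session, by type.
-- Run from a second connection while the query of interest executes.
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 55  -- placeholder session id
GROUP BY resource_type, request_mode
ORDER BY lock_count DESC;
```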

My main points in writing the post were that most people don’t consider that more granular locks can lead to more blocking (if you take out a lot of them), and that they make lock escalation more likely. So while I sometimes will use a ROWLOCK hint, I try to consider other alternatives before implementing one.

Re-reading the post almost a year later, I’m surprised I didn’t talk about index tuning more as a way to reduce locks. The other workarounds I mention are valid as well, but index tuning can absolutely help reduce the lock footprint. Of course, there are many environments where you can’t index for every query and you’re going to have scans sneak through, too.

Essentially, for your main table, I’d try to find the most frequent blocking queries and look at indexing (nonclustered, possibly columnstore depending on database usage and your version), query tuning, and possibly RCSI or snapshot isolation, in addition to the option of changing the locking options allowed. They all have trade-offs, but it’s quite possible you can get better performance overall with one of the other options.
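(For reference, both versioning options mentioned here are database-level settings; a minimal sketch, with a placeholder database name:)

```sql
-- Hypothetical sketch: enable versioned reads for a database.
-- RCSI changes how plain READ COMMITTED statements read data:
ALTER DATABASE [YourDatabase]
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Snapshot isolation must be allowed before sessions can request it:
ALTER DATABASE [YourDatabase]
SET ALLOW_SNAPSHOT_ISOLATION ON;
```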

ps: your comment was quite thought-provoking, and made me realize that I want to look into locking implications and configuration options for the newer nonclustered columnstore indexes on OLTP tables. Got that on my list of things to research.


JRStern January 5, 2017 7:20 pm It’s a bit of an unusual situation, at least for a SQL Server system in this day and age. The clustered PK has four fields, and most of the locking specifies at least the first two, plus one or more additional fields not in the PK. Often there is a “top(@n)” involved, too, and of course it’s embedded in a transaction.

At some point we will likely change it so it at least has an int identity PK, even if the four fields stay as the CK, because of some more complex challenges. It’s almost a textbook case of why the identity PKs are so common in SQL Server, but it takes a bit of a trip to come to the conclusion – even if the conclusion is actually known in advance.

Thanks for your great site, always something here to pick up!

Sriki February 13, 2017 8:54 am Hi Kendra,

Very well written and, more importantly, it’s to the point. I found this site when looking around about query hints, in particular lock hints, and it helped me understand them a bit better. One question I do have, though, and it’s based on the problem I’m currently trying to resolve. Issue at hand: deadlocks in production between two stored procedures hitting a common table, one trying to update while the other tries to read. The read sproc has a query hint of (ReadCommittedLock), which was put in there, I presume, to avoid phantom reads. Due to this hint the deadlock occurs, and the read proc is chosen as the victim. On looking at the trace, I found that there were page locks (multiple, due to MAXDOP being set to 6).

Question: Is there a way to avoid phantom reads in the select proc without using locks that cause the deadlocks?

Thanks Sri

Kendra Little February 13, 2017 9:04 am Thanks for the kind words.

What isolation level is the rest of the query run under?

Typically ReadCommittedLock is used if the database has Read Committed Snapshot Isolation (RCSI) enabled, and for some reason it was chosen to not read the “versioned” data for that particular table. However, read committed is prone to phantom reads. RCSI statements aren’t prone to phantom reads within that statement. Within a larger transaction, you can protect from phantom reads by using snapshot isolation (and if RCSI is enabled, you’re already doing the versioning anyway).

Since it’s possible the session is using serializable isolation level or something else, I’m not going to go too far down those rabbit holes, though!
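(A minimal sketch of the snapshot isolation approach mentioned above, assuming ALLOW_SNAPSHOT_ISOLATION has been enabled for the database; the table and predicate are placeholders:)

```sql
-- Requires: ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRAN;
    -- Both reads see one consistent version of the data,
    -- so no phantom rows can appear between them.
    SELECT COUNT(*) FROM dbo.CommonTable WHERE Status = N'Pending';
    SELECT COUNT(*) FROM dbo.CommonTable WHERE Status = N'Pending';
COMMIT;
```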

Optimizer- Oracle and SQL Server, Hints - DBA Kevlar June 19, 2017 6:35 pm […] world than in the Oracle one. I have to send some love and attention to Kendra Little after I found this cartoon she drew in regards to her frustration with the use of ROWLOCK […]

Kyle Dooley October 9, 2017 10:41 am Thanks for the article. I am trying to delete 10 years of data out of a VLDB, and I am having some blocking issues with the script. I broke it down to 10k rows at a time for each table as it goes to the bottom of the relationships and works backwards. I also have a master loop that goes a month at a time for the entire thing. Everyone wants me to use ROWLOCK, and I have on many of the queries. There are over 400 delete queries. Do you have any suggestions? Much appreciated.
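(For reference, the batching pattern described above usually looks something like this; the table, column, cutoff, and batch size are all hypothetical:)

```sql
-- Hypothetical sketch: delete in small batches so each transaction
-- holds fewer locks and is less likely to trigger lock escalation.
DECLARE @BatchSize int = 10000,
        @Cutoff    datetime = '2007-01-01',
        @Rows      int = 1;

WHILE @Rows > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.ChildTable
    WHERE CreatedDate < @Cutoff;

    SET @Rows = @@ROWCOUNT;
END;
```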

Kendra Little October 10, 2017 6:49 pm Indexing the deletes so they can efficiently find the data they need to remove is the simplest big-picture advice– but with 400 delete queries, that sounds like a job.

Not to ask a dumb question, but if the 10 years of data is a significant chunk of the size of the database, could it be less work to migrate the remaining data to a fresh database? If the data you’re deleting contains LOB data, or you also want to do some tasks like optimize your file/filegroup layout and reclaim space, that can sometimes be easier in the long run, depending on the situation.


steveconroy88 commented 2 years ago

Hi Kendra, I have these crazy thoughts sometimes. I'm wondering if there's another way to nudge SQL Server towards row locking: change the fill factor on the clustered (and probably all) indexes to "1". Would that force one row per page and possibly "help" SQL Server stay with the equivalent of a row lock? On a very small table (100-200 rows) I am not worried about fragmentation when all queries are targeting one specific row.
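(For reference, the setting being described is applied at index rebuild time; a sketch with a placeholder table name. Note that FILLFACTOR is a percentage of page fullness applied when the index is built or rebuilt, not a row count, and later inserts can fill pages back up:)

```sql
-- Hypothetical sketch: rebuild all indexes on a table nearly empty
-- (pages filled to ~1%). Applies only at (re)build time.
ALTER INDEX ALL ON dbo.SmallTable
REBUILD WITH (FILLFACTOR = 1);
```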