brendon / acts_as_list

An ActiveRecord plugin for managing lists.
http://brendon.github.io/acts_as_list/
MIT License

PG::TRDeadlockDetected #235

Closed seanarnold closed 3 months ago

seanarnold commented 7 years ago

We seem to intermittently get Deadlock errors with acts_as_list and Postgres.

PG::TRDeadlockDetected: ERROR: deadlock detected
DETAIL: Process 18092 waits for ShareLock on transaction 53198930; blocked by process 30411. Process 30411 waits for ShareLock on transaction 53198931; blocked by process 18092.
HINT: See server log for query details.
CONTEXT: while updating tuple (15,28) in relation "payment_methods"
UPDATE "payment_methods" SET "updated_at" = '2016-10-20 00:52:19.754301', "sort_order" = ("payment_methods"."sort_order" - 1) WHERE (user_id = 'user_id' AND deleted_at IS NULL) AND ("payment_methods"."sort_order" > 1) AND ("payment_methods"."sort_order" <= 3 AND "payment_methods".id != 'pm_id')

Is this a known issue? I'm using acts_as_list (0.7.7).

brendon commented 7 years ago

Not that we're aware of. Can you try the latest version and see if that helps? Is it to do with conflicting ORDER BY clauses on this query vs another that's also updating records? I had a similar deadlock problem and it turned out to be due to the ordering being overridden, resulting in updates occurring from opposite ends of the table and clashing (that's a very technical explanation!).

We do lock a record using with_lock. I can't remember which version that came out in, so you could try the version before that to see if the lock is the problem. We're looking at removing this lock or making it better.
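
For reference, with_lock only locks the single row being positioned, nothing else in the list. A minimal sketch of what that means (illustrative only, not the gem's internals; `item` is a placeholder for any list record):

```ruby
# with_lock opens a transaction and reloads the record with
# SELECT ... FOR UPDATE, so only this one row is locked while the block runs.
item.with_lock do
  # position bookkeeping for `item` happens here; concurrent updates to
  # *other* rows in the same list are not blocked by this lock.
end
```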

ddengler commented 7 years ago

I can confirm this. It seems to happen as a combination of a uniqueness validation and multiple requests hitting the servers at the same time. Currently we delay our requests (chain them so they run serially) to prevent the deadlock issue.

brendon commented 7 years ago

Thanks @ddengler, are you using the latest version?

We really need better locking I think. If you have any experience with that side of things we'd appreciate the help. I think there's an open issue about this.

@seanarnold, have you tried the latest version?

ddengler commented 7 years ago

@brendon yes, we are using the latest version. I will ask someone on my team to have a look at it again. Might take a while, as we currently have the issue resolved and some other priorities to work on.

ddengler commented 7 years ago

@brendon can you point me to the other open issue? I could not find it right away – only an old PR #135

brendon commented 7 years ago

@ddengler, sorry, I've had a look myself and can't find anything other than that either. The discussion was around needing to lock all the records within a scope rather than just the record being updated (as we do now). This would mean that the shuffling that occurs when an item is moved is atomic (is that the right term?). There is also an issue where the scope is empty but two threads attempt to create the first record in that scope simultaneously. We need a way to lock that empty scope as well, and I'm not sure if that's even possible with SQL.
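
In rough terms, "locking the whole scope" would look something like this (a sketch only, not current acts_as_list behaviour; PaymentMethod and user stand in for the reporter's models from the query above):

```ruby
PaymentMethod.transaction do
  # Take row locks on every record in the scope, always in the same order,
  # before shuffling positions. Concurrent movers in this scope queue here.
  PaymentMethod.where(user_id: user.id, deleted_at: nil)
               .order(:id)
               .lock   # SELECT ... FOR UPDATE
               .load
  # ...reposition rows within the scope...
end
```

Note that this still wouldn't cover the empty-scope case: with no rows to lock, two threads can both decide they're creating the first record.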

I hope that helps :D No worries about the priorities. Gotta work on the important stuff first :D

swanandp commented 7 years ago

Some of this can be solved by using a lock on the parent record. Like in #254

brendon commented 7 years ago

@seanarnold I recently removed the only lock that we do in acts_as_list. I'd be interested to see if this fixes your deadlock issue.

brendon commented 6 years ago

@seanarnold, we made a change the other day (released as the latest version of the gem) that removed another point of contention. Can you try the latest gem to see if your deadlocks subside?

brendon commented 6 years ago

@seanarnold, is this still a problem for you?

ddengler commented 6 years ago

@brendon sorry for not getting back to you on this. Sadly I cannot remember what we did back then to "fix" the issue for us. I just checked our error reporting again to be sure, and the issue has not popped up for us anymore. We are currently using the latest version.

seanarnold commented 6 years ago

We still see the issue in 0.9.9. Have there been changes to the locking behaviour since then?

brendon commented 6 years ago

Not since then. We still only lock in the insert_at_position() function. If you have any ideas for how to solve this I'd be more than grateful :D

brendon commented 6 years ago

It's probably more the implicit locks that come from updating large portions of the table at once (when shuffling positions etc...).
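
The kind of statement I mean is the shuffle from the original report: every row between the old and new position is updated, and therefore row-locked, by a single UPDATE. Roughly (placeholder names, mirroring the reported query rather than the gem's exact code):

```ruby
# Decrement every row between the moved item's old and new positions.
# Each matched row takes a row-level lock for the rest of the transaction.
PaymentMethod.where(user_id: user.id, deleted_at: nil)
             .where("sort_order > ? AND sort_order <= ?", old_position, new_position)
             .where.not(id: moved_item.id)
             .update_all("sort_order = sort_order - 1")
```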

vi-huynh commented 6 years ago

I get the same PG::TRDeadlockDetected deadlock as @seanarnold.
I'm using the newest version (0.9.11) and updating positions across 212 rows.

Here is the backtrace:

- acts_as_list (0.9.11) lib/acts_as_list/active_record/acts/position_column_method_definer.rb:34:in `block (2 levels) in define_class_methods'
- acts_as_list (0.9.11) lib/acts_as_list/active_record/acts/position_column_method_definer.rb:26:in `block (2 levels) in define_class_methods'
- activerecord (4.2.7.1) lib/active_record/relation/delegation.rb:70:in `block in decrement_all'
- activerecord (4.2.7.1) lib/active_record/relation.rb:302:in `scoping'
- activerecord (4.2.7.1) lib/active_record/relation/delegation.rb:70:in `decrement_all'
- acts_as_list (0.9.11) lib/acts_as_list/active_record/acts/list.rb:367:in `shuffle_positions_on_intermediate_items'
- acts_as_list (0.9.11) lib/acts_as_list/active_record/acts/list.rb:418:in `update_positions'

vi-huynh commented 6 years ago

@ddengler I just reproduced the deadlock in the development environment. I configured Puma with 3 workers and sent multiple requests to the server at the same time, and then got the deadlock error.

brendon commented 6 years ago

@vi-huynh, if you could do some more digging into this to figure out why, that would be helpful :) Keen to get this fixed :)

ddengler commented 6 years ago

@vi-huynh that makes sense to me, as we fixed this by not doing our operations in parallel anymore. We are also using Puma, but now with workers set to 0 (= 1) and multiple threads within a Docker container (multiple containers within a cluster).

brendon commented 6 years ago

Thanks @ddengler for offering some more info :) I'm at a loss as to the cause, but it's probably to do with table scans locking rows even if they're not the rows being updated. Could indexing help alleviate the problem?

swanandp commented 6 years ago

> Some of this can be solved by using a lock on the parent record. Like in #254

Are you getting a deadlock with or without this?

brendon commented 4 years ago

> Why are the requests not all ordered by the position column (either all 'asc' or 'desc')? It's not very surprising that there are deadlocks happening: if you want to access several resources, the only scenario where deadlock is guaranteed not to happen is when concurrent requests for the same resources take their locks in the same order.

@patleb, I'm happy to look at any solutions you might have to this problem :)

patleb commented 4 years ago

@brendon, I realized that it wasn't an ordering problem on the position column, but on the id column. Simply put, every request would have to fetch records ordered by id (either all 'asc' or 'desc') and then apply the transformation you want in a subquery.

It would solve the problem at the source, but you would have to pretty much rewrite the whole gem to make this happen... which might not be worth the effort.
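
As a rough sketch of the idea, applied to the query from the original report (my illustration only, reusing the reporter's model and column names):

```ruby
PaymentMethod.transaction do
  # 1. Lock the affected rows in a consistent order (by id)...
  locked = PaymentMethod.where(user_id: user.id, deleted_at: nil)
                        .where("sort_order > ? AND sort_order <= ?", 1, 3)
                        .order(:id)
                        .lock            # SELECT ... FOR UPDATE
                        .select(:id)
                        .to_a

  # 2. ...then apply the shift. Two concurrent shuffles can still block each
  #    other, but they can no longer take the same locks in opposite orders,
  #    so they cannot deadlock.
  PaymentMethod.where(id: locked.map(&:id))
               .update_all("sort_order = sort_order - 1")
end
```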

brendon commented 4 years ago

Yea, it's been an annoying problem for a long time. I don't have the time to rewrite anything, but I'm always happy to review PRs :D

brendon commented 3 months ago

Closing for now. Check out https://github.com/brendon/positioning which uses an advisory lock to (hopefully) prevent deadlocks.
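
The gist of the advisory-lock approach is something like this (a simplified sketch, not the positioning gem's actual code; the key scheme here is made up):

```ruby
require "zlib"

# Derive a per-list lock key; any stable integer per scope works.
key = Zlib.crc32("payment_methods/user/#{user.id}")

PaymentMethod.transaction do
  # pg_advisory_xact_lock blocks until the lock is free and releases it
  # automatically at commit/rollback, so list operations on the same scope
  # run one at a time.
  PaymentMethod.connection.execute("SELECT pg_advisory_xact_lock(#{key})")
  # ...perform the repositioning...
end
```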

patleb commented 3 months ago

It'll work, but you're essentially preventing parallelization.

brendon commented 3 months ago

As mentioned, I'm open to other ideas. The problem is, it needs to be something encapsulated so that it's transparent to the user of the gem. I also don't see how you could expect parallelised updating to work at all with a single list of positions. The list operations have to be done in sequence unless I'm missing something.

This has been a long running issue but so far not much has surfaced in the way of any solutions.

patleb commented 3 months ago

You need to make some compromises somewhere, but my previous explanation of why the problem occurs still holds. If you keep the group of rows ordered by id when you want to change the position, then you won't have a deadlock. It breaks when you have, for example, one process updating ids (1,2) and another process updating ids (2,1), where the former locks id 1 and the latter locks id 2 at the same time. Then neither can release its lock until id 1 or 2 is released, which will never happen unless you detect the deadlock.

I did implement my own list management after our last encounter here. The biggest compromises I made are:

I know that the first compromise could be mitigated/solved by the Stern–Brocot tree implementation, but I didn't bother since the applications I work on have a very low churn of repositioning.

brendon commented 3 months ago

Thanks @patleb, that's great insight. I totally understand that deadlock issue. Unfortunately, the solution of ordering on UPDATE only works with MySQL; PostgreSQL and SQLite don't allow an explicit update order. In my gem I work around this (for the uniqueness constraint) by inverting the affected records' positions to negative values and then re-inverting them with the adjustment up or down. Not so great, but it works and is predictable. This definitely requires the advisory lock. Ideally the lock would just be around the list management stuff, but I end up having to wrap the entire transaction for a reason I can't remember.
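
Roughly, the inversion mechanism looks like this (a simplified sketch rather than the positioning gem's code; it assumes a unique index on the scope plus position, and that the row being moved has already been parked out of the way):

```ruby
PaymentMethod.transaction do
  window = PaymentMethod.where(user_id: user.id)
                        .where("sort_order > ? AND sort_order <= ?", 1, 3)

  # Step 1: flip the affected window negative, out of the way of the
  # unique index on (user_id, sort_order). e.g. 2, 3 => -2, -3
  window.update_all("sort_order = -sort_order")

  # Step 2: flip back with the shift applied in the same expression,
  # so no two rows ever share a positive position mid-update.
  # e.g. -2, -3 => 1, 2
  PaymentMethod.where(user_id: user.id)
               .where("sort_order < ?", 0)
               .update_all("sort_order = -sort_order - 1")
end
```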

Your list management looks more like ranked-model now in that it's sparse rather than sequential integers. I'm glad that you found a solution that works :)

brendon commented 3 months ago

Actually, I see that you're using a self referential join to overcome the update order problems. I might look into that for my gem. Does that actually force the records in the table to update in order? I would have thought the order of the self-join wouldn't have any effect?

In terms of order corruption, I noticed it being most prevalent (at least in the case of sequential integers) when processes were executing in parallel and were relying on stale information about the end of the list etc... An advisory lock was the only thing I tried that seemed to work.

patleb commented 3 months ago

If you're referring to this, then it's the only way I found for an ordered update. Although, in this case, there's a table lock and it's a window over the whole table where each id has a new position as an integer starting from 0. I think that you could try something similar with a limited window over your chosen ids.

In my case, this method is called periodically in a cron job to check if there's a need to rewrite the positions. Since I wrote it, this method has not once actually had to rewrite the positions. We mostly append records where this functionality is used, and where there's more frequent usage, there's only a handful of people (admins) moving the records.

The actual code running in the request is built around this method, where we assign a new position as the midpoint between the adjacent rows of choice.
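
In rough terms, the periodic rewrite is the equivalent of something like this (a sketch of the idea, not my actual code; it assumes exclusive access and that any unique index on the position is deferrable or otherwise taken care of):

```ruby
PaymentMethod.transaction do
  # Block other writers while the whole table is renumbered.
  PaymentMethod.connection.execute("LOCK TABLE payment_methods IN EXCLUSIVE MODE")

  # Recompute every position as a dense integer starting at 0, joining the
  # table to an on-the-fly ranked list of its own ids.
  PaymentMethod.connection.execute(<<~SQL)
    UPDATE payment_methods AS pm
    SET sort_order = ranked.new_position
    FROM (
      SELECT id, ROW_NUMBER() OVER (ORDER BY sort_order, id) - 1 AS new_position
      FROM payment_methods
    ) AS ranked
    WHERE pm.id = ranked.id
  SQL
end
```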

patleb commented 3 months ago

To actually answer your first question: I don't know if postgres guarantees the update in the specified order, but my guess is that it would, since it's based on an on-the-fly list instead of the underlying table, so there's no previous data structure that postgres would reuse.

For the order corruption part, it makes sense that if there's no guarantee of preserved ordering between two requests acting on intersecting lists, then the assumptions about the limits wouldn't be stable. To prevent this from happening, I made the position global to the table, and the position is always the result of the midpoint between two rows, so it's not absolute. For example, position 10.125 means something only relative to other positions (ex.: [10, 10.125, 10.5] --> [1, 2, 3]).
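
A stripped-down version of the positioning logic looks something like this (the names are made up for the example; the position column is assumed to be a Postgres NUMERIC, i.e. a BigDecimal in Ruby):

```ruby
# Returns the position for a row placed between two (possibly nil) neighbours.
def midpoint_position(before_row, after_row)
  return after_row.position - 1 if before_row.nil?    # new first row
  return before_row.position + 1 if after_row.nil?    # new last row
  (before_row.position + after_row.position) / 2      # halfway between
end

# e.g. inserting between 10 and 10.5 yields 10.25; only the moved row is
# written, and only its two neighbours need to be read.
item.update!(position: midpoint_position(prev_item, next_item))
```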

brendon commented 3 months ago

Thanks @patleb, that all makes sense. Unfortunately people usually use scopes (in my experience).

I tried to mitigate things further by encouraging people to position items relative to other items rather than absolutely (though both work).

patleb commented 3 months ago

You can still use scopes; it's just that they aren't relevant when updating the position, since the position is relative to the adjacent rows globally, not the ones from the scope. Even if your scope filters out some rows, you would still get the correct ordering.

patleb commented 3 months ago

Just to be clear, I'm talking about my implementation; there's no overlap that I'm aware of between the strategies and assumptions you used in this gem and ranked_model... so you can't really apply what I did here.

patleb commented 3 months ago

Also, I'm not suggesting adding this to your gems; I shared my work simply because you said that you don't know how it would be possible to parallelise updating a list. This is how it's done. It could be better, as I laid out in the compromises I made, but it lets you take row-level locks on the absolute minimum number of rows possible while updating the position, and it prevents deadlocks.

brendon commented 3 months ago

> You can still use scopes; it's just that they aren't relevant when updating the position, since the position is relative to the adjacent rows globally, not the ones from the scope. Even if your scope filters out some rows, you would still get the correct ordering.

That's an interesting approach. So do items within different scopes end up clumped together within the global ordering or are they generally dispersed as they're added and reordered?

brendon commented 3 months ago

> Also, I'm not suggesting adding this to your gems; I shared my work simply because you said that you don't know how it would be possible to parallelise updating a list. This is how it's done. It could be better, as I laid out in the compromises I made, but it lets you take row-level locks on the absolute minimum number of rows possible while updating the position, and it prevents deadlocks.

Thanks @patleb, that all makes sense. Thanks for sharing. Always happy to be learning more! :D

patleb commented 3 months ago

It's a bit complicated to explain it all; there are a lot of cases to take care of. At the top level there are 6 possible branches, and in each of them there are more conditional branches... so I'll try to keep it simple. To answer your question, let's take one possible scenario:

It may not be obvious, but whatever happens to the whole list doesn't matter; what matters is the previous row, the immediate next one, and the one we want to update. Even if multiple processes try to insert a row between the same adjacent rows, the unique constraint will raise and you would have to retry with a new adjacent row, which would be the one the other process might have inserted.
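
In code, the collision handling amounts to a small retry loop, roughly like this (hypothetical helper names, reusing the midpoint idea from the earlier sketch):

```ruby
attempts = 0
begin
  item.update!(position: midpoint_position(prev_item, next_item))
rescue ActiveRecord::RecordNotUnique
  raise if (attempts += 1) > 3
  # Another process inserted between the same neighbours first; re-read the
  # adjacent rows (hypothetical helper) and try the midpoint again.
  prev_item, next_item = adjacent_rows_for(item)
  retry
end
```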

> So do items within different scopes end up clumped together within the global ordering or are they generally dispersed as they're added and reordered?

Scopes are merely filters in this situation. If you have two scopes, one fetching [{id: 1, pos: 1.0}, {id: 3, pos: 1.5}, {id: 2, pos: 2.0}] and the other [{id: 1, pos: 1.0}, {id: 2, pos: 2.0}], only the ordering matters. You have to let go of the idea of absolute positioning: it's the positions relative to one another that define the order.

patleb commented 3 months ago

The only time there might be a global reordering is when a position number ends up with too many decimals (ex.: 1.125135125125125125125...5, up to 64 decimals). Otherwise, the new position is either the midpoint, the lowest immediate integer position minus 1.0, or the highest immediate integer position plus 1.0. The dispersion is handled by default with the NUMERIC type in Postgres.

brendon commented 3 months ago

Thanks again @patleb. That's a good locking strategy that would apply easily to ranked-model I think. If I'm ever actively developing in that space again I'll look into it.

Certainly it's never possible to honour two requests for an identical move or insert, because the second will inherently rely on stale data. My push for relative positioning in the new gem (e.g. position: {before: Item.find(33)}) was an alternative to throwing an error: just try to honour the general intent of the request (e.g. an item could potentially be placed before another item that also wanted to be before 33 just moments ago, but at least they'd both be before 33). So many ways to skin this cat eh! :D