rails / solid_queue

Database-backed Active Job backend

Some jobs failing due to ActiveRecord::Deadlocked when trying to create a ScheduledExecution #162

Open andbar opened 6 months ago

andbar commented 6 months ago

We are seeing some failed jobs due to hitting a deadlock when Solid Queue tries to create the ScheduledExecution for the job. For us this usually happens on jobs that have to be retried because of a throttling constraint we're dealing with from an external API. Here is one example, with part of the backtrace: the job attempts to execute, hits the throttling constraint, so it tries to schedule a retry, and it looks like the ScheduledExecution.create_or_find_by! on line 40 of app/models/solid_queue/job/schedulable.rb hits the deadlock on the insert.
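For context, the failing path looks roughly like this (a simplified sketch: the job class and error name are made up, and the schedule call is paraphrased from the backtrace, not copied from the gem's source):

    # Hypothetical job illustrating our setup: it retries when the external
    # API throttles us, which schedules the retry for a future time.
    class ExternalApiJob < ApplicationJob
      retry_on ExternalApi::ThrottledError, wait: 30.seconds, attempts: 10

      def perform(record)
        ExternalApi.deliver(record) # raises ThrottledError when rate-limited
      end
    end

    # Scheduling the retry is what creates the ScheduledExecution row; per
    # the backtrace, schedulable.rb line 40 does roughly:
    #   ScheduledExecution.create_or_find_by!(job_id: id)
    # and that INSERT is where ActiveRecord::Deadlocked is raised.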

[Screenshot attached: "Screenshot 2024-03-01 at 12 19 12 PM"]

Backtrace:

/var/www/shield/vendor/bundle/ruby/3.2.0/gems/mysql2-0.5.5/lib/mysql2/client.rb:151:in `_query'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/mysql2-0.5.5/lib/mysql2/client.rb:151:in `block in query'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/mysql2-0.5.5/lib/mysql2/client.rb:150:in `handle_interrupt'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/mysql2-0.5.5/lib/mysql2/client.rb:150:in `query'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/mysql2/database_statements.rb:100:in `block (2 levels) in raw_execute'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:1028:in `block in with_raw_connection'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activesupport-7.1.3.2/lib/active_support/concurrency/null_lock.rb:9:in `synchronize'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:1000:in `with_raw_connection'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/mysql2/database_statements.rb:98:in `block in raw_execute'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activesupport-7.1.3.2/lib/active_support/notifications/instrumenter.rb:58:in `instrument'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:1143:in `log'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/mysql2/database_statements.rb:97:in `raw_execute'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract_mysql_adapter.rb:233:in `execute_and_free'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/mysql2/database_statements.rb:23:in `internal_exec_query'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:153:in `exec_insert'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:191:in `insert'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:25:in `insert'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/persistence.rb:588:in `_insert_record'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/persistence.rb:1252:in `_create_record'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/counter_cache.rb:187:in `_create_record'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/locking/optimistic.rb:84:in `_create_record'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/encryption/encryptable_record.rb:184:in `_create_record'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/attribute_methods/dirty.rb:240:in `_create_record'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/callbacks.rb:445:in `block in _create_record'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activesupport-7.1.3.2/lib/active_support/callbacks.rb:110:in `run_callbacks'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activesupport-7.1.3.2/lib/active_support/callbacks.rb:952:in `_run_create_callbacks'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/callbacks.rb:445:in `_create_record'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/timestamp.rb:114:in `_create_record'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/persistence.rb:1221:in `create_or_update'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/callbacks.rb:441:in `block in create_or_update'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activesupport-7.1.3.2/lib/active_support/callbacks.rb:110:in `run_callbacks'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activesupport-7.1.3.2/lib/active_support/callbacks.rb:952:in `_run_save_callbacks'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/callbacks.rb:441:in `create_or_update'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/timestamp.rb:125:in `create_or_update'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/persistence.rb:751:in `save!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/validations.rb:55:in `save!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/transactions.rb:313:in `block in save!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/transactions.rb:365:in `block in with_transaction_returning_status'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:342:in `transaction'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/transactions.rb:361:in `with_transaction_returning_status'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/transactions.rb:313:in `save!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/suppressor.rb:56:in `save!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/persistence.rb:55:in `create!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/relation.rb:918:in `_create!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/relation.rb:118:in `block in create!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/relation.rb:929:in `_scoping'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/relation.rb:467:in `scoping'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/relation.rb:118:in `create!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/relation.rb:229:in `block in create_or_find_by!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract/transaction.rb:535:in `block in within_new_transaction'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activesupport-7.1.3.2/lib/active_support/concurrency/null_lock.rb:9:in `synchronize'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract/transaction.rb:532:in `within_new_transaction'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:344:in `transaction'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/transactions.rb:212:in `transaction'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/relation/delegation.rb:105:in `transaction'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/relation.rb:229:in `create_or_find_by!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/activerecord-7.1.3.2/lib/active_record/querying.rb:23:in `create_or_find_by!'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/solid_queue-0.2.1/app/models/solid_queue/job/schedulable.rb:40:in `schedule'
/var/www/shield/vendor/bundle/ruby/3.2.0/gems/solid_queue-0.2.1/app/models/solid_queue/job/executable.rb:65:in `prepare_for_execution'

rosa commented 6 months ago

@andbar, could you share the LATEST DETECTED DEADLOCK section if you run SHOW ENGINE INNODB STATUS in the DB server where you have Solid Queue tables?
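
In case it's useful to anyone else hitting this, one way to pull that report from a Rails console (assuming the Solid Queue tables live on the app's primary MySQL connection):

    # Fetch the full InnoDB status report and print it; the relevant part
    # is the LATEST DETECTED DEADLOCK section.
    status = ActiveRecord::Base.connection
               .exec_query("SHOW ENGINE INNODB STATUS")
               .first["Status"]
    puts status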

andbar commented 6 months ago

Here you go. Thanks for looking into it. Let me know if you need any more info.

latest_detected_deadlock.txt

rosa commented 6 months ago

@andbar, I've looked into this and I can see why the deadlock is happening, but it's not clear to me why transaction (2) is locking 361 records in the index_solid_queue_dispatch_all index of solid_queue_scheduled_executions while only trying to delete 18 rows. Could you let me know which versions of Solid Queue and MySQL you're running, and your dispatcher configuration?

Thank you! 🙏

andbar commented 6 months ago

Yep, here they are:

solid_queue (0.2.1)
MySQL (Aurora): 8.0.mysql_aurora.3.03.1

dispatchers:
    - polling_interval: 1
      batch_size: 500
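
For my own understanding: I believe the dispatcher boils down to a loop like the sketch below (my simplified reading, not the gem's actual code), which is where the DELETE side of the deadlock comes from.

    # Every polling_interval, move due scheduled executions to the ready
    # table in batches of batch_size (simplified; not SolidQueue's real code).
    loop do
      SolidQueue::ScheduledExecution.transaction do
        due = SolidQueue::ScheduledExecution
                .where("scheduled_at <= ?", Time.current)
                .ordered
                .limit(500) # batch_size
        # ...insert matching solid_queue_ready_executions rows, then DELETE
        # the scheduled ones; that batched DELETE is transaction (2) above.
      end
      sleep 1 # polling_interval
    end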
rosa commented 6 months ago

Agh, @andbar, I was completely stumped because I thought the code you were running included this change https://github.com/basecamp/solid_queue/pull/155/commits/a30c2cb47a8f872c89f1643f80dec87e5f001691, which I made about 3 weeks ago (GitHub shows last week because I had rebased) and which we're running in production on the branch with support for recurring tasks. I noticed that problem while I was working on recurring jobs because I hit a super similar deadlock, and fixed it there. Then, when you reported this one and I looked at your SHOW ENGINE INNODB STATUS, I was like "how can this be possible?!" 😆 🤦‍♀️ In my head, I had already shipped that fix ages ago. I think the deadlock is due to that missing job_id in the ordered scope.
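Concretely, the change in that commit boils down to something like this (paraphrased, not the exact diff):

    # Before: ordering only by (scheduled_at, priority) leaves ties, so two
    # dispatchers can scan and lock the same rows in different orders:
    scope :ordered, -> { order(scheduled_at: :asc, priority: :asc) }

    # After: adding job_id makes the order total, so concurrent dispatchers
    # always lock rows in the same sequence:
    scope :ordered, -> { order(scheduled_at: :asc, priority: :asc, job_id: :asc) }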

I'm going to try to ship a new version with support for recurring jobs and that fix so you can test it. I'm on call this week and had a busy Monday, but hopefully will get to it tomorrow!

andbar commented 6 months ago

Ah! Haha, so easy to do. We'll be glad to test that fix with our situation. I'll watch for a new version. Thank you!

rosa commented 6 months ago

Thank you so much! I just shipped version 0.2.2 with this fix, as I didn't have time to wrap up the recurring jobs PR, so I decided to just extract that fix. Could you try this one and see if you still encounter the deadlock? 🙏 Thank you!

andbar commented 6 months ago

Hi, @rosa, we deployed the new version and thought it might have fixed it, but unfortunately we got some more deadlocks today. Here's the latest deadlock log from the DB; hopefully it helps pinpoint what's causing it.

latest_deadlock.txt

rosa commented 6 months ago

Ouch 😞 Sorry about that, and thanks for the new debug info. I'll continue looking into it.

andbar commented 6 months ago

Thanks, @rosa. I haven't had a chance to look at it much myself yet, due to some other work needed in the project that uses this gem, but I'm hoping to dig into it further. I don't have much experience with deadlocks, though, so I'm trying to brush up on that first 😬.

At first glance, it looks like the issue is that both transactions (the insert and the delete) need to lock the same index, index_solid_queue_dispatch_all. Or more precisely: the insert locks the PRIMARY index first and then tries to acquire a lock on index_solid_queue_dispatch_all, while the delete goes in the opposite direction, locking index_solid_queue_dispatch_all first and then trying to acquire a lock on the PRIMARY index. Does that sound right?

Maybe that's also why the delete transaction (transaction 2 in the logs) shows "5336 row lock(s)" even though it's only deleting something like 19 jobs: because of the locks it takes on the index_solid_queue_dispatch_all index?
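Schematically, the interleaving I'm imagining (just my reading of the deadlock log, not verified):

    Txn 1 (job INSERT)                       Txn 2 (dispatcher DELETE)
    locks PRIMARY record                     locks index_solid_queue_dispatch_all range
    waits on index_solid_queue_dispatch_all  waits on PRIMARY record
    => each holds the lock the other is waiting for: deadlock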

rosa commented 5 months ago

Hey @andbar, sorry for the delay! I haven't forgotten about this one, but I've been working on other stuff in the last few weeks. I'm back looking at this, and I wonder if https://github.com/rails/solid_queue/pull/199 might help in your case. Would you mind trying that one out if you're still using Solid Queue and experiencing this problem?

andbar commented 4 months ago

Hi @rosa, I just realized I hadn't responded to you yet, I apologize for that. We ended up moving away from Solid Queue to using something else that was just a better fit for our particular needs due to the nature of the jobs we're running, so unfortunately I won't be able to test that fix. I'm sorry!

rosa commented 4 months ago

Hi @andbar! Oh, no worries at all! Thanks so much for letting me know, really appreciate you taking the time to test and report this and your patience through the troubleshooting 🤗

paulhtrott commented 4 months ago

Hi @rosa 👋🏽

We tried your fix #199 at Pressable, and it fixed our deadlock issue.

Rails 7.1.2, Ruby 3.2.2, MariaDB 10.6.17 (trilogy adapter)

Thank you! 🚀

paulhtrott commented 4 months ago

Hi again @rosa. I spoke too soon. #199 did not completely get rid of our deadlock issue, but it did reduce it.

rosa commented 4 months ago

Hey @paulhtrott, ohhhh, thanks for letting me know! It's a bummer it didn't solve it completely, but I'll take the reduction. I'll merge that and start using it in HEY, as we're going to increase the number of scheduled jobs per day by ~4M tomorrow, so hopefully we'll hit this ourselves and that will give me more info to think of other solutions.

paulhtrott commented 3 months ago

Hi @rosa, we've had zero luck diagnosing our deadlock issue. This is how our architecture is structured:

Our errors don't show many details beyond the following, plus a stack trace (attached):

We have resorted to setting wait_until for most of our jobs; the delay seems to help in most cases, but it's inconvenient in some (rough example below).
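
For reference, the workaround is just the standard Active Job delay option (the job name and delay value here are illustrative):

    # Enqueue with a delay so the INSERT doesn't land in the index range the
    # dispatcher is scanning right now; 5.minutes is just an example value.
    SomeJob.set(wait_until: 5.minutes.from_now).perform_later(record)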

Are there any other details that might be helpful for you?

stack_trace.txt stack_trace_2.txt db_logs.txt

rosa commented 3 months ago

Ohhh, @paulhtrott, that's super helpful! This looks like a different kind of deadlock than the one I tried to fix! Let me try to put together another fix for this one.

rosa commented 3 months ago

That deadlock is the same one as https://github.com/rails/solid_queue/pull/229, going to tweak that one a bit and ship.

rosa commented 2 months ago

@paulhtrott, I just released version 0.3.3 with a possible fix for this one deadlock: #240.

paulhtrott commented 2 months ago

> @paulhtrott, I just released version 0.3.3 with a possible fix for this one deadlock: #240.

Thank you @rosa , we will give that a try today, I'll report back after a couple of days 🎉

paulhtrott commented 2 months ago

Hi @rosa, I'm back sooner than I wanted to be 😆. We are still having the issue after 0.3.3. Same two stack traces, basically.

rosa commented 2 months ago

Ohh, bummer 😞 I'll continue working on it. Thanks a lot for letting me know!

rosa commented 2 months ago

Oh, @paulhtrott, I realised something... The query now used to delete records from solid_queue_ready_executions should be using the primary key instead of job_id, so I wonder whether the locks both transactions are waiting on are still on the primary key index or on another index... If you have the output of the LATEST DETECTED DEADLOCK section when you run SHOW ENGINE INNODB STATUS, would you mind sharing it?
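For reference, the difference I mean is roughly this (paraphrased, with made-up variable names):

    # Current form: DELETE by primary key, so row locks land on PRIMARY:
    SolidQueue::ReadyExecution.where(id: selected_ids).delete_all

    # Earlier form: DELETE by job_id, which also took locks on the job_id index:
    SolidQueue::ReadyExecution.where(job_id: job_ids).delete_all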

Thank you so much again 🙏

paulhtrott commented 2 months ago

Hey @rosa! Sure, here is the output

deadlock.txt