Closed: zarqman closed this issue 1 year ago
@zarqman thank you for trying the bleeding edge and opening this issue. That is not good!
I just pushed up #797. Could you give that branch a try?
I wasn't able to reproduce what you're seeing with a trivial reproduction application, but I fully believe that there can be autoloading problems/deadlocks with current Rails HEAD. I'm fairly certain this is related to recent updates to Active Record, but can't yet pinpoint exactly what. Something like either:
@bensheldon, thanks for looking into this!
I just tried #797 @ fb3e386 and unfortunately got another error #6 (`map_types!`) on 3 of 4 tries.
@zarqman nuts! Would you be able to share your project with me or set up a repro project?
I've setup a repro project here: https://github.com/zarqman/goodjob-rails71-test
I'm able to trigger some sort of error with `foreman start goodjob` around 1 in 30 times. That's much less often than in the original app I've been testing against, where it's 1 in 2 or more.
I did a lot of exploring as to why that might be, and I think it comes down to how much code is being loaded by the Rails reloader. The proxy for that seems to be how many files are in `app/**/*.rb`. I did not find any particular difference caused by gems, initializers, or various Rails configs. As you'll see in the test app, those parts are nearly 100% stock.
I haven't yet determined if my added parameters in `database.yml` make a difference. Since those are sent at the beginning of the connection, I surmise they may slightly add to connection overhead and help trigger things, but I'm still unsure.
By setting `config.eager_load = true`, I've been unable to trigger the issue even in the original app. That makes some sense to me, as that changes the Rails reloader behavior.
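For anyone else hitting this, the workaround boils down to something like the following in the development environment config (a minimal sketch, assuming the standard generated `config/environments/development.rb`):

```ruby
# config/environments/development.rb (sketch of the workaround, assuming the
# standard generated file)
Rails.application.configure do
  # Eager-load application code at boot instead of lazily via the reloader.
  # With this set, I haven't been able to trigger the errors.
  config.eager_load = true
end
```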
I also happened to see several instances of what appear to be unexpected inbound messages (from PG to Rails) on both 3.7.4 and fb3e386. Here's an indicative example--the message types and quantity varied.
The most common error continues to be #6, however.
Don't know if this is related or not, but I'm also sometimes seeing GoodJob take extra long to shut down. Instead of the normal < 1 second, it ends up being 5 seconds (at which point `foreman` forces termination--not sure how long it would sit otherwise). Happens on 3.7.4 and fb3e386. In the repro app, it's around 1 in 10 times.
Thank you for the help with the reproduction repo 🙏🏻 I haven't had much luck getting a reproduction of these exceptions, but I have seen the situation where GoodJob fails to shutdown on SIGINT and foreman has to send it a SIGKILL. That's not right.
Here's my debugging log 😊 https://ruby.social/@bensheldon/109713413526769074
I was running these in separate terminal windows to try to trigger the deadlock:
while true; do touch app/jobs/application_job.rb; echo "sleep"; sleep 1; done
loop { TestJob.perform_later; sleep 0.1 }
I feel somewhat confident that the problem is that the database connection is being accessed by multiple threads and that's bad. But I'm not quite sure which threads would be doing it. Still figuring that out.
@bensheldon I don't know if I clarified how I'm triggering the errors. They all happen at launch. So, my process has effectively been:
human_loop do
foreman start goodjob
sleep 2 # or so ... until GJ is loaded and stable
kill foreman with ^C
wait until foreman/goodjob are finished # explains the many opportunities for the shutdown hang :)
end
Based on the variety of errors, I agree that the likely problem is the db connection being shared across threads.
Your note of `GoodJob::Execution` being missing caught my attention. I would think that class would be `require`d and not subject to eviction by the reloader. That makes me wonder what else might be unloaded that's not supposed to be, and if that would shine a light on the rest of this? (And hopefully isn't just a distraction from the original issue at hand.)
@zarqman fyi, I released the change in #797, there have been a few other small changes, and I'm also now running CI against Rails HEAD. Maybe there could be an improvement 🤞🏻
@bensheldon I just had a chance to try v3.12.1 and same issue, unfortunately. 😢 Wish I had a better report!
@bensheldon I spent some time this evening trying to debug this further.
A `PG::Connection` object is definitely being shared across two different threads (GJ's Notifier and Scheduler) and running overlapping queries, which is the source of the variety of errors.
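To illustrate what I mean by overlapping queries, this is not GoodJob code, just a minimal sketch (it assumes a reachable Postgres via a `DATABASE_URL` placeholder) of why sharing one raw connection across threads produces exactly these kinds of nil results and protocol errors:

```ruby
# Not GoodJob code: a minimal sketch of two threads sharing one raw
# PG::Connection. DATABASE_URL is a placeholder for a reachable Postgres.
require "pg"

conn = PG.connect(ENV.fetch("DATABASE_URL"))

threads = 2.times.map do
  Thread.new do
    50.times do
      # Overlapping exec calls on one socket can interleave the wire protocol,
      # surfacing as PG::UnableToSend, nil results, or corrupted type maps,
      # much like the errors quoted above.
      conn.exec("SELECT pg_sleep(0.001)")
    end
  end
end
threads.each(&:join)
```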
Near as I can tell at this point, the connection that `Notifier` tries to reserve for itself in its own `with_connection` is somehow getting reused by `Scheduler`/`JobPerformer` when looking for work.
I'm suspicious that the DB connection is being cached somewhere inside `GoodJob::Execution`, since that's where `Notifier` reserves its connection and what `JobPerformer` uses to look for work. The backtraces point to schema operations, so the cached connection might be tied up in there somehow.
Even though `Notifier` successfully removes the connection from the pool, it's still somehow associated with the `Execution` class. When `JobPerformer` then looks for work on `Execution`, the connection is improperly reused. If `Notifier` is still doing its own init work at the same time (usually calling `Process.register`), then boom 💥.
I've explored adding a check in `AbstractAdapter#with_raw_connection` to verify whether the connection belongs to `Notifier` but isn't being used on `Notifier`'s thread, and the underlying issue is happening quite often--the two threads don't always execute queries on top of each other, so sometimes we skate by.
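Roughly, the kind of check I mean is sketched below (a throwaway debugging monkeypatch, not something to ship; the `@owner` ivar it peeks at is Rails-internal, so treat that part as an assumption):

```ruby
# Throwaway debugging sketch: warn when a raw connection is used from a thread
# other than the one that leased it. @owner is an internal AbstractAdapter
# detail (an assumption on my part), so this is for poking around only.
module RawConnectionThreadCheck
  def with_raw_connection(*args, **kwargs, &block)
    owner = instance_variable_get(:@owner)
    if owner && owner != Thread.current
      warn "[conn check] #{self.class.name} used on #{Thread.current.inspect}, leased by #{owner.inspect}"
    end
    super
  end
end

ActiveRecord::ConnectionAdapters::AbstractAdapter.prepend(RawConnectionThreadCheck)
```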
What's interesting is that `JobPerformer`'s call to `Execution` never checks out the doubled-up connection. `Execution` seems to somehow already have it, although I haven't concretely identified where it's coming from. As noted above, the backtraces point toward schema operations, hence my suspicion of something being cached there.
It's possible that the underlying issue has been there for some time, but the conservative locking in Rails <= 7.0 compensated for it. With the recent change in locking, it's now exposed.
I can also confirm (and the backtraces in earlier comments show) that ActiveRecord is using NullLock when `good_job start` is running. Assuming connections stay within their proper threads, do you have any insight as to whether NullLock is valid and correct?
Thank you, really, for continuing to dig into this. I want to flag the really dumb/genius thing I'm doing which is reusing the Notifier's connection for Process via this thing:
https://github.com/bensheldon/good_job/blob/main/lib/good_job/assignable_connection.rb
I can speculate that something (schema cache?) is happening when the connection is assigned/overridden and then maybe Active Record is storing that connection somewhere unexpected so it's not truly "just for the life of the AssignableConnection block" and it's leaking into usage by other models.
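For anyone following along, the idea is roughly this shape (a simplified sketch with illustrative names; the real implementation is in the file linked above):

```ruby
# Simplified sketch with illustrative names (the real code is in the file
# linked above): temporarily answer `connection` on a model class with a
# specific connection for the duration of a block.
module AssignableConnectionSketch
  def self.with_assigned_connection(model_class, connection)
    model_class.define_singleton_method(:connection) { connection }
    yield
  ensure
    # Remove the override so normal connection lookup resumes afterwards.
    model_class.singleton_class.send(:remove_method, :connection)
  end
end

# Hypothetical usage, mirroring how the Notifier hands its reserved connection
# to the process-registration step:
#
#   AssignableConnectionSketch.with_assigned_connection(GoodJob::Process, conn) do
#     GoodJob::Process.cleanup
#   end
```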
Re: NullLock. I think the intention behind it is correct: connections should already be isolated within threads, so a mutex isn't necessary unless they're shared across threads, and the only place that could happen is in tests / System tests.
I'm pretty sure AssignableConnection is the culprit. Though it would be nice to know how the connection is being leaked out.
@bensheldon, the AssignableConnection stuff might aggravate this a little, but at this point I don't think it's the underlying cause. I'm also not sure it's wrong.
If I'm understanding things right, it turns out that `schema_cache` is global within each `ConnectionPool`, which means it's readily shared across threads. Unfortunately, it appears to hold its own copy of `connection`, which each calling connection overwrites with itself before performing schema-related queries. If 2+ connections/threads are both making schema queries in parallel, it's pretty easy for them to end up sharing that local copy of `connection`.
To that end, I can conceive of ways to trigger this in multi-threaded code without wonky connection reassignments. If Notifier used its reserved connection to make queries that triggered schema queries, that would likely trigger this as well--all without AssignableConnection.
I suspect the underlying issue has been there a long time, but when connections had their own locking, that effectively covered it over. With those locks replaced by `NullLock`, there's no longer any protection from parallel queries.
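Stripped way down, the shape of the race looks like this (toy code, not Rails internals):

```ruby
# Toy model of the race, not Rails internals: a cache shared across threads
# holds a single `connection` reference that every caller overwrites before
# doing "schema" work, so thread A can end up querying on thread B's connection.
class ToySchemaCache
  attr_accessor :connection # shared, overwritten by each caller

  def primary_keys(table)
    connection.query("primary key of #{table}") # may no longer be the caller's connection
  end
end

class ToyConnection
  def initialize(name)
    @name = name
  end

  def query(sql)
    "#{@name} ran: #{sql}"
  end
end

cache = ToySchemaCache.new

results = %w[conn_a conn_b].map do |name|
  Thread.new do
    conn = ToyConnection.new(name)
    cache.connection = conn # each thread overwrites the shared reference
    sleep(rand / 100.0)     # widen the window for the interleaving
    cache.primary_keys("good_jobs")
  end
end.map(&:value)

# Most runs show at least one query executed on the other thread's connection.
puts results
```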
@zarqman oh yeah! I see that now. Yeah, I do not think that SchemaCache is threadsafe. To walk through what I'm seeing:
Here is where the current connection (locked to the current thread) is passed down:
...and into this un-threadsafe method, where the connection is cached with a plain assignment. There should probably be a mutex around that cached assignment; as written, it is not threadsafe.
I want to propose that this could simply (?) be made threadsafe by turning this attr_accessor into a thread_cattr_accessor and use that instead of the `@connection` ivar:
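Something like this rough sketch (plain thread-locals standing in for thread_cattr_accessor; not a finished patch):

```ruby
# Rough sketch of the idea (not a finished patch): store the cached connection
# reference per-thread instead of in the shared @connection ivar. This is
# approximately what thread_cattr_accessor would provide.
require "active_record"

ActiveRecord::ConnectionAdapters::SchemaCache.class_eval do
  def connection
    Thread.current[:schema_cache_connection]
  end

  def connection=(conn)
    Thread.current[:schema_cache_connection] = conn
  end
end
```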
...though I think it would be better (though a much bigger change) for the SchemaCache to instead take a reference to the connection pool and use that to access the current thread's connection without caching a reference to a connection at all. I dunno, what do you think?
Is this something you'd be able to share/propose upstream in rails/rails? I think you're spot on.
> I want to propose that this could simply (?) be made threadsafe by turning this attr_accessor into a thread_cattr_accessor and use that instead of the @connection ivar:
This was my thought as well, so it's encouraging that you arrived at the same conclusion! :100:
> ...though I think it would be better (though a much bigger change) for the SchemaCache to instead take a reference to the connection pool and use that to access the current thread's connection without caching a reference to a connection at all.
The question there is what to do with connections that aren't part of the connection pool, such as with Notifier after it removes its connection from the pool. In that particular case, Notifier's thread doesn't have a pool-based connection, but would need one during Process.register (just for the schema parts). Either the external assignment is still needed (in which case, is the bigger change warranted at all?) or a fallback mechanism of checking out a new, separate connection becomes needed (and checking that back in looks tricky at first glance). The fallback is an even bigger change, and would also end up requiring an additional connection from the pool itself--something the Rails folks seem (rightly) hesitant to go for.
Anyway, yes, I'm working on a PR for Rails now as it'd be great to get this fixed at the source.
@zarqman I just saw you opened up https://github.com/rails/rails/pull/38577
Seems like we may just need to wait on Rails Core.
rails/rails#47582 is the new PR (just a typo above?). That other PR (rails/rails#38577) is the same issue though, just with Sidekiq. That makes sense since both GoodJob and Sidekiq spin up multiple threads upon launch.
I've been asked to produce a more real-life test demonstrating the underlying problem, so that's the next step.
Oh no, Rails is so slow at accepting fixes (attention to detail). :) @bensheldon has been known to fix my bug reports within minutes of reporting, hehe
Shame it was a Rails thing, but c'est la vie.
fyi, this is currently in the Rails 7.1 release milestone so I'm hopeful a fix is coming.
@zarqman it looks like the upstream Rails fix has been merged 🎉
@bensheldon, excellent! And thanks again for your help in figuring out the underlying issue!
I'm running Rails 7.1 from HEAD on a new app and have run into several strange exceptions coming from somewhere in the GoodJob / Rails / pg gem stack.
As you'll see, the errors vary. I'm suspicious of a DB connection being accidentally shared between threads, but am not really sure if it's in GoodJob, ActiveRecord, or even the pg gem. It appears that Rails 7.1 made some optimizations to the threading/locking in ActiveRecord back a couple (few?) months ago, but I can't say for sure if that's related.
The errors all seem to happen right when GoodJob is first loading and making its first queries. If things load without error, the process remains stable. Further, depending on the initial error, the process may stabilize afterward. For other errors, something gets corrupted inside ActiveRecord (schema-type information it seems), and subsequent queries continue to error. In one instance, Ruby itself crashed with a segfault.
I'm posting this here to start, but I'm not fully sure if the issue lies within GoodJob or elsewhere.
Context
I'm running GoodJob in a separate process, via bin/good_job, which is where these show up. I do not recall seeing errors from inside the Rails server process (when scheduling GoodJob jobs or otherwise). I have not tried async or any other modes to compare.
- `good_job` 3.7.4
- `rails` 7.1.0.alpha from main as of Jan 14, but examples below vary from today back to Jan 1
- `pg` 1.4.5
- `ruby` 3.1.3, on macOS 12.6.2

I also tried 2-3 earlier versions of `pg` and forced a full recompile of 1.4.5. And, I also tried without `bootsnap`, but that made no difference.

The errors
As a general rule, I silence ActiveRecord queries in development, so many of these backtraces don't include the queries. I apologize. I'll start with the newest, since it does show them.
(Numbering comes from my own notes; please excuse skipped numbers.)
#6 - `map_types!` - This one seems to be the most common.
``` 13:07:36 goodjob.1 | [GoodJob] [16210] GoodJob 3.7.4 started scheduler with queues=* max_threads=5. 13:07:36 goodjob.1 | (0.8ms) SET application_name = 'GoodJob::Notifier' 13:07:36 goodjob.1 | (0.1ms) LISTEN good_job 13:07:36 goodjob.1 | [GoodJob] Notifier subscribed with LISTEN 13:07:36 goodjob.1 | GoodJob::Execution Pluck (1.9ms) SELECT "good_jobs"."scheduled_at", "good_jobs"."created_at" FROM "good_jobs" LEFT JOIN pg_locks ON pg_locks.locktype = 'advisory' AND pg_locks.objsubid = 1 AND pg_locks.classid = ('x' || substr(md5('good_jobs' || '-' || "good_jobs"."active_job_id"::text), 1, 16))::bit(32)::int AND pg_locks.objid = (('x' || substr(md5('good_jobs' || '-' || "good_jobs"."active_job_id"::text), 1, 16))::bit(64) << 32)::bit(32)::int WHERE "pg_locks"."locktype" IS NULL AND "good_jobs"."finished_at" IS NULL AND (scheduled_at > '2023-01-14 20:07:36.300494' OR "good_jobs"."scheduled_at" IS NULL AND (created_at > '2023-01-14 20:07:36.300494')) ORDER BY COALESCE(scheduled_at, created_at) ASC LIMIT $1 [["LIMIT", 10000]] 13:07:36 goodjob.1 | (0.1ms) UNLISTEN * 13:07:36 goodjob.1 | [GoodJob] Notifier unsubscribed with UNLISTEN 13:07:36 goodjob.1 | [GoodJob] Notifier errored: NoMethodError: undefined method `map_types!' for nil:NilClass 13:07:36 goodjob.1 | 13:07:36 goodjob.1 | conn.async_exec(sql).map_types!(@type_map_for_results).values 13:07:36 goodjob.1 | ^^^^^^^^^^^ 13:07:36 goodjob.1 | ["~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb:18:in `block (2 levels) in query'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb:989:in `block in with_raw_connection'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activesupport/lib/active_support/concurrency/null_lock.rb:9:in `synchronize'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb:961:in `with_raw_connection'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb:17:in `block in query'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activesupport/lib/active_support/notifications/instrumenter.rb:58:in `instrument'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb:1099:in `log'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb:16:in `query'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb:99:in `query_values'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/postgresql/schema_statements.rb:358:in `primary_keys'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/abstract/schema_statements.rb:145:in `primary_key'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/schema_cache.rb:82:in `block in primary_keys'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/schema_cache.rb:80:in `fetch'", 
"~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/connection_adapters/schema_cache.rb:80:in `primary_keys'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/attribute_methods/primary_key.rb:96:in `get_primary_key'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/attribute_methods/primary_key.rb:83:in `reset_primary_key'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/attribute_methods/primary_key.rb:71:in `primary_key'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.4/app/models/good_job/lockable.rb:224:in `_advisory_lockable_column'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/relation/delegation.rb:108:in `public_send'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/relation/delegation.rb:108:in `block in method_missing'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/relation.rb:895:in `_scoping'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/relation.rb:433:in `scoping'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-9bdd29c8317a/activerecord/lib/active_record/relation/delegation.rb:108:in `method_missing'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.4/app/models/good_job/lockable.rb:101:in `block (2 levels) in# 5 - result.clear - I think I've seen this just once.
``` 12:45:21 goodjob.1 | [GoodJob] [11795] GoodJob 3.7.4 started scheduler with queues=* max_threads=5. 12:45:21 goodjob.1 | [GoodJob] Notifier subscribed with LISTEN 12:45:21 goodjob.1 | [GoodJob] Notifier unsubscribed with UNLISTEN 12:45:21 goodjob.1 | [GoodJob] Notifier errored: NoMethodError: undefined method `clear' for nil:NilClass 12:45:21 goodjob.1 | 12:45:21 goodjob.1 | result.clear 12:45:21 goodjob.1 | ^^^^^^ 12:45:21 goodjob.1 | ["~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb:825:in `ensure in execute_and_clear'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb:825:in `execute_and_clear'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb:81:in `exec_delete'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb:186:in `delete'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb:22:in `delete'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activerecord/lib/active_record/relation.rb:626:in `delete_all'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.4/app/models/good_job/process.rb:57:in `cleanup'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.4/lib/good_job/notifier/process_registration.rb:17:in `block in register_process'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.4/lib/good_job/assignable_connection.rb:32:in `with_connection'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.4/lib/good_job/notifier/process_registration.rb:16:in `register_process'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activesupport/lib/active_support/callbacks.rb:401:in `block in make_lambda'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activesupport/lib/active_support/callbacks.rb:272:in `block in simple'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activesupport/lib/active_support/callbacks.rb:600:in `block in invoke_after'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activesupport/lib/active_support/callbacks.rb:600:in `each'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activesupport/lib/active_support/callbacks.rb:600:in `invoke_after'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-13783f36ad6c/activesupport/lib/active_support/callbacks.rb:109:in `run_callbacks'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.4/lib/good_job/notifier.rb:171:in `block (2 levels) in listen'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.4/lib/good_job/notifier.rb:215:in `with_connection'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.4/lib/good_job/notifier.rb:169:in `block in listen'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/executor/safe_task_executor.rb:24:in `block in execute'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/synchronization/mutex_lockable_object.rb:47:in `block in synchronize'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/synchronization/mutex_lockable_object.rb:47:in `synchronize'", 
"~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/synchronization/mutex_lockable_object.rb:47:in `synchronize'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/executor/safe_task_executor.rb:22:in `execute'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/ivar.rb:169:in `safe_execute'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/scheduled_task.rb:298:in `process_task'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/executor/timer_set.rb:98:in `block in ns_post_task'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:352:in `run_task'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:343:in `block (3 levels) in create_worker'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:334:in `loop'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:334:in `block (2 levels) in create_worker'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:333:in `catch'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/concurrent-ruby-1.1.10/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:333:in `block in create_worker'"] ```# 4 - Length error in message type Z
``` 12:07:16 goodjob.1 | [GoodJob] [27979] GoodJob 3.7.2 started scheduler with queues=* max_threads=5. 12:07:16 goodjob.1 | [GoodJob] Notifier subscribed with LISTEN 12:07:16 goodjob.1 | [GoodJob] Notifier unsubscribed with UNLISTEN 12:07:16 goodjob.1 | [GoodJob] Notifier errored: ActiveRecord::StatementInvalid: PG::UnableToSend: message contents do not agree with length in message type "Z" 12:07:16 goodjob.1 | 12:07:16 goodjob.1 | ["~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb:18:in `exec'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb:18:in `block (2 levels) in query'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb:989:in `block in with_raw_connection'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activesupport/lib/active_support/concurrency/null_lock.rb:9:in `synchronize'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb:961:in `with_raw_connection'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb:17:in `block in query'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activesupport/lib/active_support/notifications/instrumenter.rb:58:in `instrument'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb:1102:in `log'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/postgresql/database_statements.rb:16:in `query'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb:99:in `query_values'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/postgresql/schema_statements.rb:359:in `primary_keys'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/abstract/schema_statements.rb:145:in `primary_key'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/schema_cache.rb:82:in `block in primary_keys'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/schema_cache.rb:80:in `fetch'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/connection_adapters/schema_cache.rb:80:in `primary_keys'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/attribute_methods/primary_key.rb:96:in `get_primary_key'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/attribute_methods/primary_key.rb:83:in `reset_primary_key'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/attribute_methods/primary_key.rb:71:in `primary_key'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.2/app/models/good_job/lockable.rb:224:in `_advisory_lockable_column'", 
"~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/relation/delegation.rb:108:in `public_send'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/relation/delegation.rb:108:in `block in method_missing'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/relation.rb:895:in `_scoping'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/relation.rb:433:in `scoping'", "~/.rvm/gems/ruby-3.1.3@rails71/bundler/gems/rails-d247f491a4e4/activerecord/lib/active_record/relation/delegation.rb:108:in `method_missing'", "~/.rvm/gems/ruby-3.1.3@rails71/gems/good_job-3.7.2/app/models/good_job/lockable.rb:101:in `block (2 levels) in