Oh, that's a strange one! Let me look into it.
Do you get any errors before this, when Solid Queue starts, when registering any of the processes (dispatchers, workers, etc.)? You should get some lines like this:
SolidQueue-0.9.0 Register Supervisor (10.0ms) pid: 34424, hostname: "Rosas-Air-M2.localdomain", process_id: 6, name: "supervisor-e232055b690c3c5d3e63"
SolidQueue-0.9.0 Fail claimed jobs (3.6ms) job_ids: [], process_ids: []
SolidQueue-0.9.0 Started Supervisor (36.5ms) pid: 34424, hostname: "Rosas-Air-M2.localdomain", process_id: 6, name: "supervisor-e232055b690c3c5d3e63"
SolidQueue-0.9.0 Prune dead processes (19.1ms) size: 0
SolidQueue-0.9.0 Register Dispatcher (43.3ms) pid: 34425, hostname: "Rosas-Air-M2.localdomain", process_id: 7, name: "dispatcher-8c9a168af2f75adbde63"
SolidQueue-0.9.0 Started Dispatcher (44.5ms) pid: 34425, hostname: "Rosas-Air-M2.localdomain", process_id: 7, name: "dispatcher-8c9a168af2f75adbde63", polling_interval: 1, batch_size: 500, concurrency_maintenance_interval: 600
SolidQueue-0.9.0 Register Worker (42.5ms) pid: 34427, hostname: "Rosas-Air-M2.localdomain", process_id: 8, name: "worker-426c95702eccb5c40ffd"
SolidQueue-0.9.0 Started Worker (43.4ms) pid: 34427, hostname: "Rosas-Air-M2.localdomain", process_id: 8, name: "worker-426c95702eccb5c40ffd", polling_interval: 0.1, queues: "default", thread_pool_size: 5
SolidQueue-0.9.0 Register Worker (44.3ms) pid: 34426, hostname: "Rosas-Air-M2.localdomain", process_id: 9, name: "worker-cd0acd29813c42fb1214"
SolidQueue-0.9.0 Started Worker (45.2ms) pid: 34426, hostname: "Rosas-Air-M2.localdomain", process_id: 9, name: "worker-cd0acd29813c42fb1214", polling_interval: 0.1, queues: "background", thread_pool_size: 3
SolidQueue-0.9.0 Register Scheduler (48.1ms) pid: 34428, hostname: "Rosas-Air-M2.localdomain", process_id: 10, name: "scheduler-7d5a0693d55b7a6b85d0"
SolidQueue-0.9.0 Started Scheduler (53.9ms) pid: 34428, hostname: "Rosas-Air-M2.localdomain", process_id: 10, name: "scheduler-7d5a0693d55b7a6b85d0", recurring_schedule: ["periodic_store_result"]
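For context, the dispatcher, worker, and scheduler lines above map one-to-one to the processes declared in Solid Queue's configuration. A minimal sketch of a config/queue.yml that would produce that exact set of startup lines — the queue names, thread counts, and polling intervals are taken from the log; the file itself is assumed, not the reporter's actual config:

```yml
# config/queue.yml — sketch reconstructed from the startup log above
development:
  dispatchers:
    - polling_interval: 1
      batch_size: 500
  workers:
    - queues: default
      threads: 5
      polling_interval: 0.1
    - queues: background
      threads: 3
      polling_interval: 0.1
```

The scheduler's recurring_schedule: ["periodic_store_result"] would come from an entry in config/recurring.yml along these lines (the job class and schedule are hypothetical):

```yml
# config/recurring.yml — hypothetical entry matching the scheduler line above
development:
  periodic_store_result:
    class: StoreResultJob
    schedule: every minute
```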
I have a suspicion of what this might be about...
Local:
Running it with the Puma plugin.
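(For reference, the Puma plugin setup is a single line in config/puma.rb; a minimal sketch of the standard setup, since the actual file wasn't shared:)

```ruby
# config/puma.rb — runs Solid Queue's supervisor inside the Puma process
plugin :solid_queue
```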
From the CI:
Thank you! That's very helpful 🙏
If that helps.
Locally it's running fine:
In production it produces this error:
Huh, ok, that's unexpected and changed what I thought this was about 😅 😅 Did you get any errors of another kind before the lock errors started?
Ok, found another thing: it happens after it has been running for some time.
Now I get the same error locally:
Aha! Now everything makes sense 😆 Ok, I'll get this fixed.
I get a lot of pessimistic lock errors, roughly one every minute, because of the heartbeat.
The error:
Backtrace:
Is there something I can do on my side?
Running:
What else do you need from me?
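A note on the once-a-minute cadence reported above: it matches Solid Queue's default heartbeat interval, which each registered process uses to update its row in solid_queue_processes. A sketch of the relevant settings in an initializer, assuming the defaults are in play — this shows where the cadence comes from, not a fix for the lock error itself:

```ruby
# config/initializers/solid_queue.rb — sketch of the heartbeat-related settings
# and their defaults; tune with care, as processes missing heartbeats for longer
# than the alive threshold get pruned as dead.
SolidQueue.process_heartbeat_interval = 60.seconds # default; one heartbeat per minute
SolidQueue.process_alive_threshold    = 5.minutes  # default; prune cutoff
```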