quantum-elixir / quantum-core

:watch: Cron-like job scheduler for Elixir
https://hexdocs.pm/quantum/
Apache License 2.0

Problem: Scheduler dies during scaling events in container orchestration environments #368

Closed · doughsay closed this 6 years ago

doughsay commented 6 years ago

We run our Elixir apps in docker-compose for testing and development and in Kubernetes for production. We also use libcluster to help our nodes discover one another. The new swarm-based version of quantum doesn't handle scaling in these environments very well.

I've put together a minimal test case that reproduces this behavior; please see my repo here: https://github.com/peek-travel/quantum_swarm. If you have Docker and docker-compose installed, it should be really easy to reproduce the error.
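For context, clustering in the example is just libcluster with a DNS-based topology (the `[libcluster:dns]` lines in the logs below come from it). Roughly, that part of the config looks like the sketch below; the strategy module and option values here are my best approximation, the exact version is in the repo:

```elixir
# config/config.exs (sketch; strategy and option values are assumptions,
# see the repo for the real config)
config :libcluster,
  topologies: [
    dns: [
      strategy: Cluster.Strategy.DNSPoll,
      config: [
        polling_interval: 5_000,
        query: "web",
        node_basename: "quantum_swarm_umbrella"
      ]
    ]
  ]
```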

The setup is pretty simple: one global scheduler with one job that just prints "PING!" to stdout once every 3 seconds.
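In config terms that is roughly the following (a sketch; the `:quantum_swarm` app name is an assumption, the real config is in the repo):

```elixir
# config/config.exs (sketch; see the repo for the exact version)
import Crontab.CronExpression

config :quantum_swarm, QuantumSwarm.Scheduler,
  global: true,         # one scheduler for the whole cluster (swarm-backed)
  debug_logging: true,
  jobs: [
    # extended (second-granularity) cron expression: every 3 seconds
    [schedule: ~e[*/3 * * * * * *]e, task: {QuantumSwarm.Pinger, :ping, []}]
  ]
```

`QuantumSwarm.Pinger.ping/0` just logs "PING!" and returns `:ok`.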

The error I get when scaling up the number of nodes is this:

```
19:26:11.646 [error] GenServer QuantumSwarm.Scheduler.ExecutorSupervisor terminating
** (FunctionClauseError) no function clause matching in Quantum.ExecutionBroadcaster.add_job_to_state/2
    (quantum) lib/quantum/execution_broadcaster.ex:271: Quantum.ExecutionBroadcaster.add_job_to_state({~N[2018-08-24 19:26:12], [%Quantum.Job{name: #Reference<0.1786720193.3425435649.14848>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 19:26:10], timer: nil})
    (elixir) lib/enum.ex:1925: Enum."-reduce/3-lists^foldl/2-0-"/3
    (quantum) lib/quantum/execution_broadcaster.ex:241: Quantum.ExecutionBroadcaster.handle_cast/2
    (gen_stage) lib/gen_stage.ex:2039: GenStage.noreply_callback/3
    (stdlib) gen_server.erl:637: :gen_server.try_dispatch/4
    (stdlib) gen_server.erl:711: :gen_server.handle_msg/6
    (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Last message: {:DOWN, #Reference<0.1786720193.3425435656.15142>, :process, #PID<32743.1366.0>, {:function_clause, [{Quantum.ExecutionBroadcaster, :add_job_to_state, [{~N[2018-08-24 19:26:12], [%Quantum.Job{name: #Reference<0.1786720193.3425435649.14848>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 19:26:10], timer: nil}], [file: 'lib/quantum/execution_broadcaster.ex', line: 271]}, {Enum, :"-reduce/3-lists^foldl/2-0-", 3, [file: 'lib/enum.ex', line: 1925]}, {Quantum.ExecutionBroadcaster, :handle_cast, 2, [file: 'lib/quantum/execution_broadcaster.ex', line: 241]}, {GenStage, :noreply_callback, 3, [file: 'lib/gen_stage.ex', line: 2039]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 637]}, {:gen_server, :handle_msg, 6, [file: 'gen_server.erl', line: 711]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]}}
State: %ConsumerSupervisor{args: %Quantum.ExecutorSupervisor.InitOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, execution_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.ExecutionBroadcaster}, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}, children: %{}, max_restarts: 3, max_seconds: 5, mod: Quantum.ExecutorSupervisor, name: QuantumSwarm.Scheduler.ExecutorSupervisor, producers: %{}, restarting: 0, restarts: [], strategy: :one_for_one, template: {Quantum.Executor, {Quantum.Executor, :start_link, [%Quantum.Executor.StartOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}]}, :temporary, 5000, :worker, [Quantum.Executor]}}
```

Please have a look at the repo and let me know if you have any questions about how I set it up. I'll also try to dig into the error above and see if I can figure out what's going on. If I do, I'll send a PR!

It should also be noted that quantum 2.2.7 does not have this bug, but it has others: while scaling up the number of nodes you might run the same job multiple times, and while scaling down you might miss some runs. That isn't such a big deal in our case, so we are reverting to 2.2.7 until we can get to the bottom of the error above.
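In case it helps anyone else hitting this in the meantime, the revert is just a dependency pin along these lines (the version constraint is simply what we happen to be on):

```elixir
# mix.exs (sketch): stay on the 2.2.x line (pre-swarm) until this is resolved
defp deps do
  [
    {:quantum, "~> 2.2.7"}
  ]
end
```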

maennchen commented 6 years ago

@doughsay Thanks for this detailed report! Could you enable debug logging and paste the resulting log here?

doughsay commented 6 years ago

@maennchen Debug logging is already enabled in my example repo, so following the steps in my README should give you detailed output. It's very noisy, but here's an example of the output:

Steps taken:

  1. Run `docker-compose up --build` (runs one node; PING messages show up every 3 seconds as expected)
  2. Run `docker-compose up -d --scale web=3` (boots up 2 more nodes and usually causes the scheduler to die; PING messages stop)
```
web_1 | 20:43:32.182 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:init] started web_1 | 20:43:37.190 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:cluster_wait] joining cluster.. web_1 | 20:43:37.190 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:cluster_wait] no connected nodes, proceeding without sync web_1 | 20:43:37.190 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] registering QuantumSwarm.Scheduler.TaskRegistry as process started by Elixir.Quantum.TaskRegistry.start_link/1 with args [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}] web_1 | 20:43:37.190 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] starting QuantumSwarm.Scheduler.TaskRegistry on quantum_swarm_umbrella@172.18.0.2 web_1 | 20:43:37.190 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] started QuantumSwarm.Scheduler.TaskRegistry on quantum_swarm_umbrella@172.18.0.2 web_1 | 20:43:37.191 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] registering QuantumSwarm.Scheduler.JobBroadcaster as process started by Elixir.Quantum.JobBroadcaster.start_link/1 with args [%Quantum.JobBroadcaster.StartOpts{debug_logging: true, jobs: [%Quantum.Job{name: #Reference<0.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}], name: QuantumSwarm.Scheduler.JobBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_1 | 20:43:37.191 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] starting QuantumSwarm.Scheduler.JobBroadcaster on quantum_swarm_umbrella@172.18.0.2 web_1 | 20:43:37.191 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.JobBroadcaster] Loading Initial Jobs from Config web_1 | 20:43:37.191 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] started QuantumSwarm.Scheduler.JobBroadcaster on quantum_swarm_umbrella@172.18.0.2 web_1 | 20:43:37.191 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] registering QuantumSwarm.Scheduler.ExecutionBroadcaster as process started by Elixir.Quantum.ExecutionBroadcaster.start_link/1 with args [%Quantum.ExecutionBroadcaster.StartOpts{debug_logging: true, job_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.JobBroadcaster}, name: QuantumSwarm.Scheduler.ExecutionBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_1 | 20:43:37.192 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] starting QuantumSwarm.Scheduler.ExecutionBroadcaster on quantum_swarm_umbrella@172.18.0.2 web_1 | 20:43:37.192 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Unknown last execution time, using now web_1 | 20:43:37.192 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] started QuantumSwarm.Scheduler.ExecutionBroadcaster on quantum_swarm_umbrella@172.18.0.2 web_1 | 20:43:37.193 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] registering #PID<0.1362.0> as :"Elixir.#Reference<0.1546560186.1826357250.128492>", with metadata %{} web_1 | 20:43:37.193 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] add_meta {nil, true} to #PID<0.1362.0> web_1 | 20:43:37.194 [debug] 
[:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Adding job #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:37.194 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Continuing Execution Broadcasting at -576460744561 (2018-08-24T20:43:39) web_1 | 20:43:37.208 [info] Running QuantumSwarmWeb.Endpoint with Cowboy using http://:::80 web_1 | 20:43:39.001 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Scheduling job for execution #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:39.001 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Continuing Execution Broadcasting at -576460741561 (2018-08-24T20:43:42) web_1 | 20:43:39.001 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Task for job #Reference<0.1546560186.1826357250.127660> started on node :"quantum_swarm_umbrella@172.18.0.2" web_1 | 20:43:39.002 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execute started for job #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:39.002 [info] PING! web_1 | 20:43:39.002 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execution ended for job #Reference<0.1546560186.1826357250.127660>, which yielded result: :ok web_1 | 20:43:41.966 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Scheduling job for execution #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:41.966 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Continuing Execution Broadcasting at -576460738561 (2018-08-24T20:43:45) web_1 | 20:43:41.966 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Task for job #Reference<0.1546560186.1826357250.127660> started on node :"quantum_swarm_umbrella@172.18.0.2" web_1 | 20:43:41.967 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execute started for job #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:41.967 [info] PING! web_1 | 20:43:41.967 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execution ended for job #Reference<0.1546560186.1826357250.127660>, which yielded result: :ok web_1 | 20:43:44.967 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Scheduling job for execution #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:44.967 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Continuing Execution Broadcasting at -576460735561 (2018-08-24T20:43:48) web_1 | 20:43:44.967 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Task for job #Reference<0.1546560186.1826357250.127660> started on node :"quantum_swarm_umbrella@172.18.0.2" web_1 | 20:43:44.968 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execute started for job #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:44.968 [info] PING! 
web_1 | 20:43:44.968 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execution ended for job #Reference<0.1546560186.1826357250.127660>, which yielded result: :ok web_1 | 20:43:47.967 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Scheduling job for execution #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:47.967 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Continuing Execution Broadcasting at -576460732561 (2018-08-24T20:43:51) web_1 | 20:43:47.967 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Task for job #Reference<0.1546560186.1826357250.127660> started on node :"quantum_swarm_umbrella@172.18.0.2" web_1 | 20:43:47.968 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execute started for job #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:47.968 [info] PING! web_1 | 20:43:47.968 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execution ended for job #Reference<0.1546560186.1826357250.127660>, which yielded result: :ok web_1 | 20:43:50.994 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Scheduling job for execution #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:50.995 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Continuing Execution Broadcasting at -576460729561 (2018-08-24T20:43:54) web_1 | 20:43:50.995 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Task for job #Reference<0.1546560186.1826357250.127660> started on node :"quantum_swarm_umbrella@172.18.0.2" web_1 | 20:43:50.995 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execute started for job #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:50.995 [info] PING! web_1 | 20:43:50.995 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execution ended for job #Reference<0.1546560186.1826357250.127660>, which yielded result: :ok web_1 | 20:43:54.002 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Scheduling job for execution #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:54.002 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Continuing Execution Broadcasting at -576460726561 (2018-08-24T20:43:57) web_1 | 20:43:54.002 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Task for job #Reference<0.1546560186.1826357250.127660> started on node :"quantum_swarm_umbrella@172.18.0.2" web_1 | 20:43:54.002 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execute started for job #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:54.002 [info] PING! 
web_1 | 20:43:54.002 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execution ended for job #Reference<0.1546560186.1826357250.127660>, which yielded result: :ok web_3 | 20:43:56.457 [info] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:init] started web_2 | 20:43:56.722 [info] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:init] started web_1 | 20:43:57.002 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Scheduling job for execution #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:57.002 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Continuing Execution Broadcasting at -576460723561 (2018-08-24T20:44:00) web_1 | 20:43:57.003 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Task for job #Reference<0.1546560186.1826357250.127660> started on node :"quantum_swarm_umbrella@172.18.0.2" web_1 | 20:43:57.003 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execute started for job #Reference<0.1546560186.1826357250.127660> web_1 | 20:43:57.003 [info] PING! web_1 | 20:43:57.003 [debug] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.Executor] Execution ended for job #Reference<0.1546560186.1826357250.127660>, which yielded result: :ok web_1 | 20:43:57.223 [info] [libcluster:dns] connected to :"quantum_swarm_umbrella@172.18.0.3" web_1 | 20:43:57.224 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:ensure_swarm_started_on_remote_node] nodeup quantum_swarm_umbrella@172.18.0.3 web_1 | 20:43:57.224 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:cluster_wait] joining cluster.. web_1 | 20:43:57.224 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:cluster_wait] found connected nodes: [:"quantum_swarm_umbrella@172.18.0.3"] web_1 | 20:43:57.224 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:cluster_wait] selected sync node: quantum_swarm_umbrella@172.18.0.3 web_1 | 20:43:57.225 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:syncing] received registry from quantum_swarm_umbrella@172.18.0.3, merging.. web_1 | 20:43:57.225 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:syncing] local synchronization with quantum_swarm_umbrella@172.18.0.3 complete! 
web_1 | 20:43:57.225 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:resolve_pending_sync_requests] pending sync requests cleared web_1 | 20:43:57.227 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] :"quantum_swarm_umbrella@172.18.0.3" is registering QuantumSwarm.Scheduler.TaskRegistry as process started by Elixir.Quantum.TaskRegistry.start_link/1 with args [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}] web_1 | 20:43:57.227 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] found QuantumSwarm.Scheduler.TaskRegistry already registered on quantum_swarm_umbrella@172.18.0.2 web_1 | 20:43:57.228 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] :"quantum_swarm_umbrella@172.18.0.3" is registering QuantumSwarm.Scheduler.JobBroadcaster as process started by Elixir.Quantum.JobBroadcaster.start_link/1 with args [%Quantum.JobBroadcaster.StartOpts{debug_logging: true, jobs: [%Quantum.Job{name: #Reference<32743.2577347193.3974103045.118742>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}], name: QuantumSwarm.Scheduler.JobBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_1 | 20:43:57.228 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] found QuantumSwarm.Scheduler.JobBroadcaster already registered on quantum_swarm_umbrella@172.18.0.2 web_3 | 20:43:57.224 [info] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:ensure_swarm_started_on_remote_node] nodeup quantum_swarm_umbrella@172.18.0.2 web_3 | 20:43:57.225 [info] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:cluster_wait] joining cluster.. web_3 | 20:43:57.225 [info] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:cluster_wait] syncing with quantum_swarm_umbrella@172.18.0.2 web_3 | 20:43:57.226 [info] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:awaiting_sync_ack] received sync acknowledgement from quantum_swarm_umbrella@172.18.0.2, syncing with remote registry web_3 | 20:43:57.226 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:sync_registry] local tracker is missing QuantumSwarm.Scheduler.ExecutionBroadcaster, adding to registry web_3 | 20:43:57.226 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:sync_registry] local tracker is missing :"Elixir.#Reference<0.1546560186.1826357250.128492>", adding to registry web_3 | 20:43:57.226 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:sync_registry] local tracker is missing QuantumSwarm.Scheduler.JobBroadcaster, adding to registry web_3 | 20:43:57.226 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:sync_registry] local tracker is missing QuantumSwarm.Scheduler.TaskRegistry, adding to registry web_3 | 20:43:57.226 [info] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:awaiting_sync_ack] local synchronization with quantum_swarm_umbrella@172.18.0.2 complete! 
web_3 | 20:43:57.226 [info] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:resolve_pending_sync_requests] pending sync requests cleared web_3 | 20:43:57.226 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] registering QuantumSwarm.Scheduler.TaskRegistry as process started by Elixir.Quantum.TaskRegistry.start_link/1 with args [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}] web_3 | 20:43:57.226 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:do_track] starting QuantumSwarm.Scheduler.TaskRegistry on remote node quantum_swarm_umbrella@172.18.0.2 web_3 | 20:43:57.227 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:start_pid_remotely] QuantumSwarm.Scheduler.TaskRegistry already registered to #PID<32727.1359.0> on quantum_swarm_umbrella@172.18.0.2, registering locally web_3 | 20:43:57.227 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] registering QuantumSwarm.Scheduler.JobBroadcaster as process started by Elixir.Quantum.JobBroadcaster.start_link/1 with args [%Quantum.JobBroadcaster.StartOpts{debug_logging: true, jobs: [%Quantum.Job{name: #Reference<0.2577347193.3974103045.118742>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}], name: QuantumSwarm.Scheduler.JobBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_3 | 20:43:57.228 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:do_track] starting QuantumSwarm.Scheduler.JobBroadcaster on remote node quantum_swarm_umbrella@172.18.0.2 web_3 | 20:43:57.228 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:start_pid_remotely] QuantumSwarm.Scheduler.JobBroadcaster already registered to #PID<32727.1360.0> on quantum_swarm_umbrella@172.18.0.2, registering locally web_3 | 20:43:57.229 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] registering QuantumSwarm.Scheduler.ExecutionBroadcaster as process started by Elixir.Quantum.ExecutionBroadcaster.start_link/1 with args [%Quantum.ExecutionBroadcaster.StartOpts{debug_logging: true, job_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.JobBroadcaster}, name: QuantumSwarm.Scheduler.ExecutionBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_3 | 20:43:57.229 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:do_track] found QuantumSwarm.Scheduler.ExecutionBroadcaster already registered on quantum_swarm_umbrella@172.18.0.2 web_3 | 20:43:57.229 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] registering #PID<0.1365.0> as :"Elixir.#Reference<0.2577347193.3974103042.119875>", with metadata %{} web_3 | 20:43:57.231 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] add_meta {nil, true} to #PID<0.1365.0> web_1 | 20:43:57.230 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replicating registration for :"Elixir.#Reference<0.2577347193.3974103042.119875>" (#PID<32743.1365.0>) locally web_1 | 20:43:57.235 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replica event: add_meta {nil, true} to #PID<32743.1365.0> web_1 | 20:43:57.253 [info] [libcluster:dns] connected to :"quantum_swarm_umbrella@172.18.0.4" web_2 | 20:43:57.263 [info] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:ensure_swarm_started_on_remote_node] 
nodeup quantum_swarm_umbrella@172.18.0.2 web_1 | 20:43:57.264 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:ensure_swarm_started_on_remote_node] nodeup quantum_swarm_umbrella@172.18.0.4 web_1 | 20:43:57.265 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_topology_change] topology change (nodeup for quantum_swarm_umbrella@172.18.0.4) web_1 | 20:43:57.265 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_topology_change] #PID<0.1361.0> belongs on quantum_swarm_umbrella@172.18.0.3 web_1 | 20:43:57.266 [info] [:"quantum_swarm_umbrella@172.18.0.2"][Elixir.Quantum.ExecutionBroadcaster] Handing of state to other cluster node web_1 | 20:43:57.272 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_topology_change] QuantumSwarm.Scheduler.ExecutionBroadcaster has requested to be resumed web_3 | 20:43:57.275 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_replica_event] replica event: untrack #PID<32727.1361.0> web_1 | 20:43:57.282 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_topology_change] sending handoff for QuantumSwarm.Scheduler.ExecutionBroadcaster to quantum_swarm_umbrella@172.18.0.3 web_1 | 20:43:57.283 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_topology_change] topology change complete web_1 | 20:43:57.288 [info] GenStage consumer QuantumSwarm.Scheduler.ExecutorSupervisor is stopping after receiving cancel from producer #PID<0.1361.0> with reason: :shutdown web_1 | web_3 | 20:43:57.298 [debug] [:"quantum_swarm_umbrella@172.18.0.3"][Elixir.Quantum.ExecutionBroadcaster] Unknown last execution time, using now web_3 | 20:43:57.298 [info] GenStage consumer QuantumSwarm.Scheduler.ExecutorSupervisor is stopping after receiving cancel from producer #PID<32727.1361.0> with reason: :shutdown web_3 | web_3 | 20:43:57.304 [info] [:"quantum_swarm_umbrella@172.18.0.3"][Elixir.Quantum.ExecutionBroadcaster] Incorperating state from other cluster node web_1 | 20:43:57.321 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replicating registration for QuantumSwarm.Scheduler.ExecutionBroadcaster (#PID<32743.1440.0>) locally web_3 | 20:43:57.334 [error] GenServer QuantumSwarm.Scheduler.ExecutionBroadcaster terminating web_3 | ** (FunctionClauseError) no function clause matching in Quantum.ExecutionBroadcaster.add_job_to_state/2 web_3 | (quantum) lib/quantum/execution_broadcaster.ex:271: Quantum.ExecutionBroadcaster.add_job_to_state({~N[2018-08-24 20:44:00], [%Quantum.Job{name: #Reference<32727.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 20:43:57.298551], timer: nil}) web_3 | (elixir) lib/enum.ex:1925: Enum."-reduce/3-lists^foldl/2-0-"/3 web_3 | (quantum) lib/quantum/execution_broadcaster.ex:241: Quantum.ExecutionBroadcaster.handle_cast/2 web_3 | (gen_stage) lib/gen_stage.ex:2039: GenStage.noreply_callback/3 web_3 | (stdlib) gen_server.erl:637: :gen_server.try_dispatch/4 web_3 | (stdlib) gen_server.erl:711: :gen_server.handle_msg/6 web_3 | (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3 web_3 | Last message: {:"$gen_cast", {:swarm, :end_handoff, {[{~N[2018-08-24 20:44:00], [%Quantum.Job{name: 
#Reference<32727.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}], ~N[2018-08-24 20:43:58]}}} web_3 | State: %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 20:43:57.298551], timer: nil} web_3 | 20:43:57.338 [info] GenStage consumer QuantumSwarm.Scheduler.ExecutorSupervisor is stopping after receiving cancel from producer #PID<0.1440.0> with reason: {:function_clause, [{Quantum.ExecutionBroadcaster, :add_job_to_state, [{~N[2018-08-24 20:44:00], [%Quantum.Job{name: #Reference<32727.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 20:43:57.298551], timer: nil}], [file: 'lib/quantum/execution_broadcaster.ex', line: 271]}, {Enum, :"-reduce/3-lists^foldl/2-0-", 3, [file: 'lib/enum.ex', line: 1925]}, {Quantum.ExecutionBroadcaster, :handle_cast, 2, [file: 'lib/quantum/execution_broadcaster.ex', line: 241]}, {GenStage, :noreply_callback, 3, [file: 'lib/gen_stage.ex', line: 2039]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 637]}, {:gen_server, :handle_msg, 6, [file: 'gen_server.erl', line: 711]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]} web_3 | web_3 | 20:43:57.351 [error] GenServer QuantumSwarm.Scheduler.ExecutorSupervisor terminating web_3 | ** (FunctionClauseError) no function clause matching in Quantum.ExecutionBroadcaster.add_job_to_state/2 web_3 | (quantum) lib/quantum/execution_broadcaster.ex:271: Quantum.ExecutionBroadcaster.add_job_to_state({~N[2018-08-24 20:44:00], [%Quantum.Job{name: #Reference<32727.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 20:43:57.298551], timer: nil}) web_3 | (elixir) lib/enum.ex:1925: Enum."-reduce/3-lists^foldl/2-0-"/3 web_3 | (quantum) lib/quantum/execution_broadcaster.ex:241: Quantum.ExecutionBroadcaster.handle_cast/2 web_3 | (gen_stage) lib/gen_stage.ex:2039: GenStage.noreply_callback/3 web_3 | (stdlib) gen_server.erl:637: :gen_server.try_dispatch/4 web_3 | (stdlib) gen_server.erl:711: :gen_server.handle_msg/6 web_3 | (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3 web_3 | Last message: {:DOWN, #Reference<0.2577347193.3974103042.120026>, :process, #PID<0.1440.0>, {:function_clause, [{Quantum.ExecutionBroadcaster, :add_job_to_state, [{~N[2018-08-24 20:44:00], [%Quantum.Job{name: #Reference<32727.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 20:43:57.298551], timer: nil}], [file: 
'lib/quantum/execution_broadcaster.ex', line: 271]}, {Enum, :"-reduce/3-lists^foldl/2-0-", 3, [file: 'lib/enum.ex', line: 1925]}, {Quantum.ExecutionBroadcaster, :handle_cast, 2, [file: 'lib/quantum/execution_broadcaster.ex', line: 241]}, {GenStage, :noreply_callback, 3, [file: 'lib/gen_stage.ex', line: 2039]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 637]}, {:gen_server, :handle_msg, 6, [file: 'gen_server.erl', line: 711]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]}} web_3 | State: %ConsumerSupervisor{args: %Quantum.ExecutorSupervisor.InitOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, execution_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.ExecutionBroadcaster}, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}, children: %{}, max_restarts: 3, max_seconds: 5, mod: Quantum.ExecutorSupervisor, name: QuantumSwarm.Scheduler.ExecutorSupervisor, producers: %{}, restarting: 0, restarts: [], strategy: :one_for_one, template: {Quantum.Executor, {Quantum.Executor, :start_link, [%Quantum.Executor.StartOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}]}, :temporary, 5000, :worker, [Quantum.Executor]}} web_3 | 20:43:57.353 [info] GenStage consumer QuantumSwarm.Scheduler.ExecutorSupervisor is stopping after receiving cancel from producer #PID<0.1440.0> with reason: :noproc web_3 | web_3 | 20:43:57.356 [error] GenServer QuantumSwarm.Scheduler.ExecutorSupervisor terminating web_3 | ** (stop) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started web_3 | Last message: {:DOWN, #Reference<0.2577347193.3974103042.120043>, :process, #PID<0.1440.0>, :noproc} web_3 | State: %ConsumerSupervisor{args: %Quantum.ExecutorSupervisor.InitOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, execution_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.ExecutionBroadcaster}, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}, children: %{}, max_restarts: 3, max_seconds: 5, mod: Quantum.ExecutorSupervisor, name: QuantumSwarm.Scheduler.ExecutorSupervisor, producers: %{}, restarting: 0, restarts: [], strategy: :one_for_one, template: {Quantum.Executor, {Quantum.Executor, :start_link, [%Quantum.Executor.StartOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}]}, :temporary, 5000, :worker, [Quantum.Executor]}} web_3 | 20:43:57.356 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_monitor] QuantumSwarm.Scheduler.ExecutionBroadcaster is down: {:function_clause, [{Quantum.ExecutionBroadcaster, :add_job_to_state, [{~N[2018-08-24 20:44:00], [%Quantum.Job{name: #Reference<32727.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: 
~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 20:43:57.298551], timer: nil}], [file: 'lib/quantum/execution_broadcaster.ex', line: 271]}, {Enum, :"-reduce/3-lists^foldl/2-0-", 3, [file: 'lib/enum.ex', line: 1925]}, {Quantum.ExecutionBroadcaster, :handle_cast, 2, [file: 'lib/quantum/execution_broadcaster.ex', line: 241]}, {GenStage, :noreply_callback, 3, [file: 'lib/gen_stage.ex', line: 2039]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 637]}, {:gen_server, :handle_msg, 6, [file: 'gen_server.erl', line: 711]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]} web_3 | 20:43:57.359 [info] [libcluster:dns] connected to :"quantum_swarm_umbrella@172.18.0.4" web_1 | 20:43:57.368 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_monitor] QuantumSwarm.Scheduler.ExecutionBroadcaster is down: {:function_clause, [{Quantum.ExecutionBroadcaster, :add_job_to_state, [{~N[2018-08-24 20:44:00], [%Quantum.Job{name: #Reference<0.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 20:43:57.298551], timer: nil}], [file: 'lib/quantum/execution_broadcaster.ex', line: 271]}, {Enum, :"-reduce/3-lists^foldl/2-0-", 3, [file: 'lib/enum.ex', line: 1925]}, {Quantum.ExecutionBroadcaster, :handle_cast, 2, [file: 'lib/quantum/execution_broadcaster.ex', line: 241]}, {GenStage, :noreply_callback, 3, [file: 'lib/gen_stage.ex', line: 2039]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 637]}, {:gen_server, :handle_msg, 6, [file: 'gen_server.erl', line: 711]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]} web_1 | 20:43:57.370 [info] GenStage consumer QuantumSwarm.Scheduler.ExecutorSupervisor is stopping after receiving cancel from producer #PID<32743.1440.0> with reason: {:function_clause, [{Quantum.ExecutionBroadcaster, :add_job_to_state, [{~N[2018-08-24 20:44:00], [%Quantum.Job{name: #Reference<0.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 20:43:57.298551], timer: nil}], [file: 'lib/quantum/execution_broadcaster.ex', line: 271]}, {Enum, :"-reduce/3-lists^foldl/2-0-", 3, [file: 'lib/enum.ex', line: 1925]}, {Quantum.ExecutionBroadcaster, :handle_cast, 2, [file: 'lib/quantum/execution_broadcaster.ex', line: 241]}, {GenStage, :noreply_callback, 3, [file: 'lib/gen_stage.ex', line: 2039]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 637]}, {:gen_server, :handle_msg, 6, [file: 'gen_server.erl', line: 711]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]} web_1 | web_1 | 20:43:57.375 [error] GenServer QuantumSwarm.Scheduler.ExecutorSupervisor terminating web_1 | ** (FunctionClauseError) no function clause matching in Quantum.ExecutionBroadcaster.add_job_to_state/2 web_1 
| (quantum) lib/quantum/execution_broadcaster.ex:271: Quantum.ExecutionBroadcaster.add_job_to_state({~N[2018-08-24 20:44:00], [%Quantum.Job{name: #Reference<0.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 20:43:57.298551], timer: nil}) web_1 | (elixir) lib/enum.ex:1925: Enum."-reduce/3-lists^foldl/2-0-"/3 web_1 | (quantum) lib/quantum/execution_broadcaster.ex:241: Quantum.ExecutionBroadcaster.handle_cast/2 web_1 | (gen_stage) lib/gen_stage.ex:2039: GenStage.noreply_callback/3 web_1 | (stdlib) gen_server.erl:637: :gen_server.try_dispatch/4 web_1 | (stdlib) gen_server.erl:711: :gen_server.handle_msg/6 web_1 | (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3 web_1 | Last message: {:DOWN, #Reference<0.1546560186.1826357250.128899>, :process, #PID<32743.1440.0>, {:function_clause, [{Quantum.ExecutionBroadcaster, :add_job_to_state, [{~N[2018-08-24 20:44:00], [%Quantum.Job{name: #Reference<0.1546560186.1826357250.127660>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}]}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 20:43:57.298551], timer: nil}], [file: 'lib/quantum/execution_broadcaster.ex', line: 271]}, {Enum, :"-reduce/3-lists^foldl/2-0-", 3, [file: 'lib/enum.ex', line: 1925]}, {Quantum.ExecutionBroadcaster, :handle_cast, 2, [file: 'lib/quantum/execution_broadcaster.ex', line: 241]}, {GenStage, :noreply_callback, 3, [file: 'lib/gen_stage.ex', line: 2039]}, {:gen_server, :try_dispatch, 4, [file: 'gen_server.erl', line: 637]}, {:gen_server, :handle_msg, 6, [file: 'gen_server.erl', line: 711]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]}} web_1 | State: %ConsumerSupervisor{args: %Quantum.ExecutorSupervisor.InitOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, execution_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.ExecutionBroadcaster}, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}, children: %{}, max_restarts: 3, max_seconds: 5, mod: Quantum.ExecutorSupervisor, name: QuantumSwarm.Scheduler.ExecutorSupervisor, producers: %{}, restarting: 0, restarts: [], strategy: :one_for_one, template: {Quantum.Executor, {Quantum.Executor, :start_link, [%Quantum.Executor.StartOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}]}, :temporary, 5000, :worker, [Quantum.Executor]}} web_1 | 20:43:57.380 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replica event: untrack #PID<32743.1440.0> web_1 | 20:43:57.380 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_monitor] :"Elixir.#Reference<0.2577347193.3974103042.119875>" is down: :shutdown web_2 | 20:43:57.370 [info] [swarm on 
quantum_swarm_umbrella@172.18.0.4] [tracker:ensure_swarm_started_on_remote_node] nodeup quantum_swarm_umbrella@172.18.0.3 web_1 | 20:43:57.389 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replica event: untrack #PID<32743.1365.0> web_3 | 20:43:57.380 [info] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:ensure_swarm_started_on_remote_node] nodeup quantum_swarm_umbrella@172.18.0.4 web_3 | 20:43:57.380 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_topology_change] topology change (nodeup for quantum_swarm_umbrella@172.18.0.4) web_3 | 20:43:57.381 [info] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_topology_change] topology change complete web_3 | 20:43:57.381 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_replica_event] replica event: untrack #PID<0.1440.0> web_3 | 20:43:57.381 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_monitor] :"Elixir.#Reference<0.2577347193.3974103042.119875>" is down: :shutdown web_3 | 20:43:57.389 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] registering QuantumSwarm.Scheduler.TaskRegistry as process started by Elixir.Quantum.TaskRegistry.start_link/1 with args [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}] web_3 | 20:43:57.390 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:do_track] starting QuantumSwarm.Scheduler.TaskRegistry on remote node quantum_swarm_umbrella@172.18.0.2 web_3 | 20:43:57.391 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_replica_event] replica event: untrack #PID<0.1365.0> web_1 | 20:43:57.410 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] :"quantum_swarm_umbrella@172.18.0.3" is registering QuantumSwarm.Scheduler.TaskRegistry as process started by Elixir.Quantum.TaskRegistry.start_link/1 with args [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}] web_1 | 20:43:57.410 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] found QuantumSwarm.Scheduler.TaskRegistry already registered on quantum_swarm_umbrella@172.18.0.2 web_3 | 20:43:57.411 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:start_pid_remotely] QuantumSwarm.Scheduler.TaskRegistry already registered to #PID<32727.1359.0> on quantum_swarm_umbrella@172.18.0.2, registering locally web_1 | 20:43:57.419 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] :"quantum_swarm_umbrella@172.18.0.3" is registering QuantumSwarm.Scheduler.JobBroadcaster as process started by Elixir.Quantum.JobBroadcaster.start_link/1 with args [%Quantum.JobBroadcaster.StartOpts{debug_logging: true, jobs: [%Quantum.Job{name: #Reference<32743.2577347193.3974103045.118837>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}], name: QuantumSwarm.Scheduler.JobBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_1 | 20:43:57.420 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] found QuantumSwarm.Scheduler.JobBroadcaster already registered on quantum_swarm_umbrella@172.18.0.2 web_3 | 20:43:57.415 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] registering QuantumSwarm.Scheduler.JobBroadcaster as process started by Elixir.Quantum.JobBroadcaster.start_link/1 with args 
[%Quantum.JobBroadcaster.StartOpts{debug_logging: true, jobs: [%Quantum.Job{name: #Reference<0.2577347193.3974103045.118837>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}], name: QuantumSwarm.Scheduler.JobBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_3 | 20:43:57.415 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:do_track] starting QuantumSwarm.Scheduler.JobBroadcaster on remote node quantum_swarm_umbrella@172.18.0.2 web_3 | 20:43:57.443 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:start_pid_remotely] QuantumSwarm.Scheduler.JobBroadcaster already registered to #PID<32727.1360.0> on quantum_swarm_umbrella@172.18.0.2, registering locally web_1 | 20:43:57.449 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_monitor] :"Elixir.#Reference<0.1546560186.1826357250.128492>" is down: :shutdown web_3 | 20:43:57.445 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] registering QuantumSwarm.Scheduler.ExecutionBroadcaster as process started by Elixir.Quantum.ExecutionBroadcaster.start_link/1 with args [%Quantum.ExecutionBroadcaster.StartOpts{debug_logging: true, job_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.JobBroadcaster}, name: QuantumSwarm.Scheduler.ExecutionBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_3 | 20:43:57.449 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:do_track] starting QuantumSwarm.Scheduler.ExecutionBroadcaster on quantum_swarm_umbrella@172.18.0.3 web_3 | 20:43:57.450 [debug] [:"quantum_swarm_umbrella@172.18.0.3"][Elixir.Quantum.ExecutionBroadcaster] Unknown last execution time, using now web_3 | 20:43:57.450 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:do_track] started QuantumSwarm.Scheduler.ExecutionBroadcaster on quantum_swarm_umbrella@172.18.0.3 web_1 | 20:43:57.458 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] registering QuantumSwarm.Scheduler.TaskRegistry as process started by Elixir.Quantum.TaskRegistry.start_link/1 with args [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}] web_1 | 20:43:57.458 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] found QuantumSwarm.Scheduler.TaskRegistry already registered on quantum_swarm_umbrella@172.18.0.2 web_1 | 20:43:57.458 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replicating registration for QuantumSwarm.Scheduler.ExecutionBroadcaster (#PID<32743.1461.0>) locally web_1 | 20:43:57.458 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] registering QuantumSwarm.Scheduler.JobBroadcaster as process started by Elixir.Quantum.JobBroadcaster.start_link/1 with args [%Quantum.JobBroadcaster.StartOpts{debug_logging: true, jobs: [%Quantum.Job{name: #Reference<0.1546560186.1826357256.127796>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}], name: QuantumSwarm.Scheduler.JobBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_1 | 20:43:57.458 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] found QuantumSwarm.Scheduler.JobBroadcaster already registered on quantum_swarm_umbrella@172.18.0.2 web_1 
| 20:43:57.459 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] registering QuantumSwarm.Scheduler.ExecutionBroadcaster as process started by Elixir.Quantum.ExecutionBroadcaster.start_link/1 with args [%Quantum.ExecutionBroadcaster.StartOpts{debug_logging: true, job_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.JobBroadcaster}, name: QuantumSwarm.Scheduler.ExecutionBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_1 | 20:43:57.459 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] starting QuantumSwarm.Scheduler.ExecutionBroadcaster on remote node quantum_swarm_umbrella@172.18.0.3 web_1 | 20:43:57.460 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replica event: untrack #PID<0.1362.0> web_3 | 20:43:57.458 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_monitor] :"Elixir.#Reference<0.1546560186.1826357250.128492>" is down: :shutdown web_3 | 20:43:57.460 [info] Running QuantumSwarmWeb.Endpoint with Cowboy using http://:::80 web_1 | 20:43:57.478 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replicating registration for :"Elixir.#Reference<0.2577347193.3974103044.118519>" (#PID<32743.1508.0>) locally web_3 | 20:43:57.476 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_replica_event] replica event: untrack #PID<32727.1362.0> web_3 | 20:43:57.476 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] registering #PID<0.1508.0> as :"Elixir.#Reference<0.2577347193.3974103044.118519>", with metadata %{} web_1 | 20:43:57.505 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:start_pid_remotely] QuantumSwarm.Scheduler.ExecutionBroadcaster already registered to #PID<32743.1461.0> on quantum_swarm_umbrella@172.18.0.3, registering locally web_1 | 20:43:57.506 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replica event: add_meta {nil, true} to #PID<32743.1508.0> web_1 | 20:43:57.507 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] registering #PID<0.1582.0> as :"Elixir.#Reference<0.1546560186.1826357256.127838>", with metadata %{} web_3 | 20:43:57.489 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] :"quantum_swarm_umbrella@172.18.0.2" is registering QuantumSwarm.Scheduler.ExecutionBroadcaster as process started by Elixir.Quantum.ExecutionBroadcaster.start_link/1 with args [%Quantum.ExecutionBroadcaster.StartOpts{debug_logging: true, job_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.JobBroadcaster}, name: QuantumSwarm.Scheduler.ExecutionBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}] web_3 | 20:43:57.489 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:do_track] found QuantumSwarm.Scheduler.ExecutionBroadcaster already registered on quantum_swarm_umbrella@172.18.0.3 web_3 | 20:43:57.490 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] add_meta {nil, true} to #PID<0.1508.0> web_1 | 20:43:57.516 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] add_meta {nil, true} to #PID<0.1582.0> web_3 | 20:43:57.515 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_replica_event] replicating registration for :"Elixir.#Reference<0.1546560186.1826357256.127838>" (#PID<32727.1582.0>) locally web_3 | 20:43:57.518 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] 
[tracker:handle_replica_event] replica event: add_meta {nil, true} to #PID<32727.1582.0>
web_2 | 20:44:01.724 [info] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:cluster_wait] joining cluster..
web_2 | 20:44:01.724 [info] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:cluster_wait] found connected nodes: [:"quantum_swarm_umbrella@172.18.0.3", :"quantum_swarm_umbrella@172.18.0.2"]
web_2 | 20:44:01.724 [info] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:cluster_wait] selected sync node: quantum_swarm_umbrella@172.18.0.2
web_1 | 20:44:01.725 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_cast] received sync request from quantum_swarm_umbrella@172.18.0.4
web_2 | 20:44:01.729 [info] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:syncing] received registry from quantum_swarm_umbrella@172.18.0.2, merging..
web_2 | 20:44:01.729 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:sync_registry] local tracker is missing :"Elixir.#Reference<0.2577347193.3974103044.118519>", adding to registry
web_2 | 20:44:01.729 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:sync_registry] local tracker is missing QuantumSwarm.Scheduler.ExecutionBroadcaster, adding to registry
web_2 | 20:44:01.729 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:sync_registry] local tracker is missing QuantumSwarm.Scheduler.JobBroadcaster, adding to registry
web_2 | 20:44:01.729 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:sync_registry] local tracker is missing QuantumSwarm.Scheduler.TaskRegistry, adding to registry
web_2 | 20:44:01.729 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:sync_registry] local tracker is missing :"Elixir.#Reference<0.1546560186.1826357256.127838>", adding to registry
web_2 | 20:44:01.730 [info] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:syncing] local synchronization with quantum_swarm_umbrella@172.18.0.2 complete!
web_2 | 20:44:01.730 [info] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:resolve_pending_sync_requests] pending sync requests cleared
web_2 | 20:44:01.730 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_call] registering QuantumSwarm.Scheduler.TaskRegistry as process started by Elixir.Quantum.TaskRegistry.start_link/1 with args [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}]
web_2 | 20:44:01.730 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:do_track] starting QuantumSwarm.Scheduler.TaskRegistry on remote node quantum_swarm_umbrella@172.18.0.2
web_2 | 20:44:01.730 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replica event: untrack #PID<32727.1361.0>
web_2 | 20:44:01.730 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replica event: untrack #PID<32728.1440.0>
web_2 | 20:44:01.730 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replica event: untrack #PID<32728.1365.0>
web_2 | 20:44:01.730 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replica event: untrack #PID<32728.1365.0>
web_2 | 20:44:01.730 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replica event: untrack #PID<32727.1362.0>
web_2 | 20:44:01.730 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replicating registration for QuantumSwarm.Scheduler.ExecutionBroadcaster (#PID<32728.1461.0>) locally
web_2 | 20:44:01.730 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replica event: untrack #PID<32727.1362.0>
web_2 | 20:44:01.730 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replicating registration for :"Elixir.#Reference<0.2577347193.3974103044.118519>" (#PID<32728.1508.0>) locally
web_2 | 20:44:01.731 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replica event: add_meta {nil, true} to #PID<32728.1508.0>
web_2 | 20:44:01.731 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replicating registration for :"Elixir.#Reference<0.1546560186.1826357256.127838>" (#PID<32727.1582.0>) locally
web_2 | 20:44:01.731 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_replica_event] replica event: add_meta {nil, true} to #PID<32727.1582.0>
web_1 | 20:44:01.730 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:awaiting_sync_ack] received sync acknowledgement from quantum_swarm_umbrella@172.18.0.4, syncing with remote registry
web_1 | 20:44:01.731 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:awaiting_sync_ack] local synchronization with quantum_swarm_umbrella@172.18.0.4 complete!
web_1 | 20:44:01.731 [info] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:resolve_pending_sync_requests] pending sync requests cleared
web_1 | 20:44:01.731 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] :"quantum_swarm_umbrella@172.18.0.4" is registering QuantumSwarm.Scheduler.TaskRegistry as process started by Elixir.Quantum.TaskRegistry.start_link/1 with args [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}]
web_1 | 20:44:01.731 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] found QuantumSwarm.Scheduler.TaskRegistry already registered on quantum_swarm_umbrella@172.18.0.2
web_1 | 20:44:01.733 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_call] :"quantum_swarm_umbrella@172.18.0.4" is registering QuantumSwarm.Scheduler.JobBroadcaster as process started by Elixir.Quantum.JobBroadcaster.start_link/1 with args [%Quantum.JobBroadcaster.StartOpts{debug_logging: true, jobs: [%Quantum.Job{name: #Reference<32744.562989018.3974365191.11760>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}], name: QuantumSwarm.Scheduler.JobBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}]
web_1 | 20:44:01.733 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:do_track] found QuantumSwarm.Scheduler.JobBroadcaster already registered on quantum_swarm_umbrella@172.18.0.2
web_3 | 20:44:01.734 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_call] :"quantum_swarm_umbrella@172.18.0.4" is registering QuantumSwarm.Scheduler.ExecutionBroadcaster as process started by Elixir.Quantum.ExecutionBroadcaster.start_link/1 with args [%Quantum.ExecutionBroadcaster.StartOpts{debug_logging: true, job_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.JobBroadcaster}, name: QuantumSwarm.Scheduler.ExecutionBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}]
web_3 | 20:44:01.734 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:do_track] found QuantumSwarm.Scheduler.ExecutionBroadcaster already registered on quantum_swarm_umbrella@172.18.0.3
web_2 | 20:44:01.731 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:start_pid_remotely] QuantumSwarm.Scheduler.TaskRegistry already registered to #PID<32727.1359.0> on quantum_swarm_umbrella@172.18.0.2, registering locally
web_2 | 20:44:01.732 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_call] registering QuantumSwarm.Scheduler.JobBroadcaster as process started by Elixir.Quantum.JobBroadcaster.start_link/1 with args [%Quantum.JobBroadcaster.StartOpts{debug_logging: true, jobs: [%Quantum.Job{name: #Reference<0.562989018.3974365191.11760>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}], name: QuantumSwarm.Scheduler.JobBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}]
web_2 | 20:44:01.732 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:do_track] starting QuantumSwarm.Scheduler.JobBroadcaster on remote node quantum_swarm_umbrella@172.18.0.2
web_2 | 20:44:01.733 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:start_pid_remotely] QuantumSwarm.Scheduler.JobBroadcaster already registered to #PID<32727.1360.0> on quantum_swarm_umbrella@172.18.0.2, registering locally
web_2 | 20:44:01.733 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_call] registering QuantumSwarm.Scheduler.ExecutionBroadcaster as process started by Elixir.Quantum.ExecutionBroadcaster.start_link/1 with args [%Quantum.ExecutionBroadcaster.StartOpts{debug_logging: true, job_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.JobBroadcaster}, name: QuantumSwarm.Scheduler.ExecutionBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}]
web_2 | 20:44:01.734 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:do_track] starting QuantumSwarm.Scheduler.ExecutionBroadcaster on remote node quantum_swarm_umbrella@172.18.0.3
web_2 | 20:44:01.735 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:start_pid_remotely] QuantumSwarm.Scheduler.ExecutionBroadcaster already registered to #PID<32728.1461.0> on quantum_swarm_umbrella@172.18.0.3, registering locally
web_2 | 20:44:01.740 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_call] registering #PID<0.1371.0> as :"Elixir.#Reference<0.562989018.3974365192.12386>", with metadata %{}
web_2 | 20:44:01.742 [debug] [swarm on quantum_swarm_umbrella@172.18.0.4] [tracker:handle_call] add_meta {nil, true} to #PID<0.1371.0>
web_1 | 20:44:01.742 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replicating registration for :"Elixir.#Reference<0.562989018.3974365192.12386>" (#PID<32744.1371.0>) locally
web_1 | 20:44:01.745 [debug] [swarm on quantum_swarm_umbrella@172.18.0.2] [tracker:handle_replica_event] replica event: add_meta {nil, true} to #PID<32744.1371.0>
web_3 | 20:44:01.742 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_replica_event] replicating registration for :"Elixir.#Reference<0.562989018.3974365192.12386>" (#PID<32738.1371.0>) locally
web_3 | 20:44:01.745 [debug] [swarm on quantum_swarm_umbrella@172.18.0.3] [tracker:handle_replica_event] replica event: add_meta {nil, true} to #PID<32738.1371.0>
web_2 | 20:44:01.782 [info] Running QuantumSwarmWeb.Endpoint with Cowboy using http://:::80
```
maennchen commented 6 years ago

@doughsay Thanks, this confirms my assumption. The Handoff of the ExecutionBroadcaster is broken. I'll have a look and will create a PR. Thanks again for the detailed report!

maennchen commented 6 years ago

@doughsay I think I found and solved the issue. I'm a little low on time, though, which is why I couldn't test it myself. Would you mind testing again with #369?

doughsay commented 6 years ago

@maennchen thanks a lot for looking into this. Your branch is definitely better, but I was still able to cause a few error states. It's really hard to reproduce these because it just involves randomly scaling up and down to various cluster sizes.

I was able to cause two bad end states after scaling either up or down:

  1. Scheduler no longer running, i.e. no jobs running
  2. Two schedulers running, i.e. every 3 seconds, my job fired twice

Two errors I saw were:

I think I quickly scaled from 10 nodes down to 1 to cause this one:

22:27:47.695 [error] GenServer QuantumSwarm.Scheduler.ExecutorSupervisor terminating
** (stop) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started
Last message: {:DOWN, #Reference<0.2651763341.768868358.35813>, :process, #PID<32727.1361.0>, :noproc}
State: %ConsumerSupervisor{args: %Quantum.ExecutorSupervisor.InitOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, execution_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.ExecutionBroadcaster}, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}, children: %{}, max_restarts: 3, max_seconds: 5, mod: Quantum.ExecutorSupervisor, name: QuantumSwarm.Scheduler.ExecutorSupervisor, producers: %{}, restarting: 0, restarts: [], strategy: :one_for_one, template: {Quantum.Executor, {Quantum.Executor, :start_link, [%Quantum.Executor.StartOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}]}, :temporary, 5000, :worker, [Quantum.Executor]}}

I think I quickly scaled from 1 to 10 nodes to cause this one:

22:32:26.455 [error] GenServer QuantumSwarm.Scheduler.ExecutionBroadcaster terminating
** (FunctionClauseError) no function clause matching in Quantum.ExecutionBroadcaster.handle_info/2
    (quantum) lib/quantum/execution_broadcaster.ex:139: Quantum.ExecutionBroadcaster.handle_info({:swarm, :die}, %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 22:32:26.264193], timer: nil})
    (gen_stage) lib/gen_stage.ex:2022: GenStage.noreply_callback/3
    (stdlib) gen_server.erl:637: :gen_server.try_dispatch/4
    (stdlib) gen_server.erl:711: :gen_server.handle_msg/6
    (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Last message: {:swarm, :die}
State: %Quantum.ExecutionBroadcaster.State{debug_logging: true, jobs: [], scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop, time: ~N[2018-08-24 22:32:26.264193], timer: nil}

That being said, I don't think I care that much about being able to spin up or shut down 10 nodes all at once really quickly. We do slow rolling deploys in production, where we wait 30 seconds between bringing up each new node and shutting down each old one.

So I tried going slowly: I scaled up 1 node at a time until I reached 10 nodes total, then reversed and shut down 1 node at a time. Unfortunately, I was still able to produce an error that left the scheduler not running. Below is a link to the logs of a 6-node cluster with 1 node shutting down, leaving the scheduler not running and no jobs running at the end:

https://pastebin.com/mZdCh9Qp (I wasn't able to paste the logs inline as they were too big for GitHub)

doughsay commented 6 years ago

Ok, after much more tinkering and close reading of the logs, it appears that sometimes during a node shutdown it fails to hand off its state to another running node. The ExecutionBroadcaster does look like it eventually gets restarted, but without any jobs registered. Should it not maybe re-add the jobs from the static config in cases like that (see the sketch at the end of this comment)? It may also just be a coincidence, but I see it most often when going from 6 nodes to 5 by shutting one down.

See the pastebin logs from my previous comment; is my understanding of it correct that the Scheduler does eventually restart, but it's empty and has no jobs registered?

Overall though, your branch is much more stable and works 90% of the time. I'm just worried that the rolling deployment strategy we use, which shuts down old nodes one at a time, will trigger this scenario.
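
To illustrate the re-seeding idea above, here's a purely hypothetical sketch (the module, function, and config lookup are made up and not quantum's API): when a resumed broadcaster receives no handed-off jobs, it could fall back to the jobs from the static config instead of starting empty.

```elixir
defmodule MyApp.JobRecovery do
  @moduledoc false

  # Hypothetical helper: prefer the handed-off jobs, but never resume empty.
  def jobs_after_resume([], config_jobs), do: config_jobs
  def jobs_after_resume(handoff_jobs, _config_jobs), do: handoff_jobs
end

# Usage sketch: re-seed from the scheduler's static config when the handoff
# carried no jobs (config path assumed; adjust to the real app).
config_jobs = Application.get_env(:quantum_swarm, QuantumSwarm.Scheduler, [])[:jobs] || []
jobs = MyApp.JobRecovery.jobs_after_resume([], config_jobs)
```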

maennchen commented 6 years ago

@doughsay It's clear that this must work. The hard thing is that there's no testing strategy to ensure that it is really working.

I'm going to take a look again and hopefully fix everything.

I'm going to be busy over the weekend, so the earliest I'll have time to fix it is Monday.

maennchen commented 6 years ago

I also think that your error makes sense. The handle_info callback of the ExecutionBroadcaster does not check whether the timer in the state is nil, so Process.cancel_timer/1 is sometimes called with nil.

This could also lead to the ExecutorSupervisor crashing, since the ExecutionBroadcaster it depends on crashed.
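
For illustration, here is a minimal sketch of such a guard. This is not the actual patch in #369; the State struct and the {:swarm, :die} message come from the error above, while the assumption that a non-nil timer is a plain reference is mine.

```elixir
# Hedged sketch: only cancel the timer when one actually exists.
def handle_info({:swarm, :die}, %State{timer: nil} = state) do
  # Nothing was scheduled yet, so there is no timer reference to cancel.
  {:stop, :shutdown, state}
end

def handle_info({:swarm, :die}, %State{timer: timer} = state) when is_reference(timer) do
  Process.cancel_timer(timer)
  {:stop, :shutdown, state}
end
```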

maennchen commented 6 years ago

This particular error was actually so easy to fix that I quickly implemented it and updated the PR. You're welcome to test again; I'll also have a look myself as soon as I have time.

doughsay commented 6 years ago

Thank you @maennchen! I'll have a look again on Monday as well.

doughsay commented 6 years ago

I may have some more log output to help with this issue.

I had a docker-compose cluster of 2 nodes running and working (web_1 and web_2), then I attempted to scale to 3 nodes. The new node (web_3) timed out during boot and crashed. It left the cluster in a bad state and quantum was no longer running jobs. Here's the output:

web_2  | 23:40:45.001 [debug] [:"quantum_swarm_umbrella@172.23.0.3"][Elixir.Quantum.Executor] Task for job #Reference<32733.1346946828.911998984.31975> started on node :"quantum_swarm_umbrella@172.23.0.2"
web_1  | 23:40:45.003 [debug] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.Executor] Execute started for job #Reference<0.1346946828.911998984.31975>
web_1  | 23:40:45.003 [info] PING!
web_1  | 23:40:45.003 [debug] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.Executor] Execution ended for job #Reference<0.1346946828.911998984.31975>, which yielded result: :ok
web_1  | 23:40:48.000 [debug] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.ExecutionBroadcaster] Scheduling job for execution #Reference<0.1346946828.911998984.31975>
web_1  | 23:40:48.000 [debug] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.ExecutionBroadcaster] Continuing Execution Broadcasting at -576460538276 (2018-08-27T23:40:51)
web_2  | 23:40:48.011 [debug] [:"quantum_swarm_umbrella@172.23.0.3"][Elixir.Quantum.Executor] Task for job #Reference<32733.1346946828.911998984.31975> started on node :"quantum_swarm_umbrella@172.23.0.3"
web_2  | 23:40:48.012 [debug] [:"quantum_swarm_umbrella@172.23.0.3"][Elixir.Quantum.Executor] Execute started for job #Reference<32733.1346946828.911998984.31975>
web_2  | 23:40:48.012 [info] PING!
web_2  | 23:40:48.013 [debug] [:"quantum_swarm_umbrella@172.23.0.3"][Elixir.Quantum.Executor] Execution ended for job #Reference<32733.1346946828.911998984.31975>, which yielded result: :ok
web_2  | 23:40:48.069 [warn] [libcluster:dns] unable to connect to :"quantum_swarm_umbrella@172.23.0.4"
web_1  | 23:40:48.078 [warn] [libcluster:dns] unable to connect to :"quantum_swarm_umbrella@172.23.0.4"
web_3  | 23:40:49.403 [info] [swarm on quantum_swarm_umbrella@172.23.0.4] [tracker:init] started
web_1  | 23:40:51.000 [debug] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.ExecutionBroadcaster] Scheduling job for execution #Reference<0.1346946828.911998984.31975>
web_1  | 23:40:51.001 [debug] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.ExecutionBroadcaster] Continuing Execution Broadcasting at -576460535276 (2018-08-27T23:40:54)
web_2  | 23:40:51.001 [debug] [:"quantum_swarm_umbrella@172.23.0.3"][Elixir.Quantum.Executor] Task for job #Reference<32733.1346946828.911998984.31975> started on node :"quantum_swarm_umbrella@172.23.0.2"
web_1  | 23:40:51.003 [debug] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.Executor] Execute started for job #Reference<0.1346946828.911998984.31975>
web_1  | 23:40:51.003 [info] PING!
web_1  | 23:40:51.003 [debug] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.Executor] Execution ended for job #Reference<0.1346946828.911998984.31975>, which yielded result: :ok
web_2  | 23:40:53.049 [info] [libcluster:dns] connected to :"quantum_swarm_umbrella@172.23.0.4"
web_2  | 23:40:53.051 [info] [swarm on quantum_swarm_umbrella@172.23.0.3] [tracker:ensure_swarm_started_on_remote_node] nodeup quantum_swarm_umbrella@172.23.0.4
web_2  | 23:40:53.052 [debug] [swarm on quantum_swarm_umbrella@172.23.0.3] [tracker:handle_topology_change] topology change (nodeup for quantum_swarm_umbrella@172.23.0.4)
web_2  | 23:40:53.052 [info] [swarm on quantum_swarm_umbrella@172.23.0.3] [tracker:handle_topology_change] topology change complete
web_3  | 23:40:53.051 [info] [swarm on quantum_swarm_umbrella@172.23.0.4] [tracker:ensure_swarm_started_on_remote_node] nodeup quantum_swarm_umbrella@172.23.0.3
web_1  | 23:40:53.060 [info] [libcluster:dns] connected to :"quantum_swarm_umbrella@172.23.0.4"
web_3  | 23:40:53.061 [info] [swarm on quantum_swarm_umbrella@172.23.0.4] [tracker:ensure_swarm_started_on_remote_node] nodeup quantum_swarm_umbrella@172.23.0.2
web_1  | 23:40:53.061 [info] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:ensure_swarm_started_on_remote_node] nodeup quantum_swarm_umbrella@172.23.0.4
web_1  | 23:40:53.062 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] topology change (nodeup for quantum_swarm_umbrella@172.23.0.4)
web_1  | 23:40:53.062 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] #PID<0.1384.0> belongs on quantum_swarm_umbrella@172.23.0.4
web_1  | 23:40:53.062 [info] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.TaskRegistry] Handing of state to other cluster node
web_1  | 23:40:53.062 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] QuantumSwarm.Scheduler.TaskRegistry has requested to be resumed
web_2  | 23:40:53.063 [debug] [swarm on quantum_swarm_umbrella@172.23.0.3] [tracker:handle_replica_event] replica event: untrack #PID<32733.1384.0>
web_2  | 23:40:53.070 [debug] [swarm on quantum_swarm_umbrella@172.23.0.3] [tracker:handle_replica_event] replica event: untrack #PID<32733.1385.0>
web_2  | 23:40:53.075 [info] GenStage consumer QuantumSwarm.Scheduler.ExecutorSupervisor is stopping after receiving cancel from producer #PID<32733.1386.0> with reason: :shutdown
web_2  |
web_2  | 23:40:53.078 [debug] [swarm on quantum_swarm_umbrella@172.23.0.3] [tracker:handle_monitor] QuantumSwarm.Scheduler.ExecutionBroadcaster is down: :shutdown
web_1  | 23:40:53.069 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] sending handoff for QuantumSwarm.Scheduler.TaskRegistry to quantum_swarm_umbrella@172.23.0.4
web_2  | 23:40:53.079 [info] GenStage consumer QuantumSwarm.Scheduler.ExecutorSupervisor is stopping after receiving cancel from producer #PID<32733.1386.0> with reason: :noproc
web_2  |
web_2  | 23:40:53.079 [error] GenServer QuantumSwarm.Scheduler.ExecutorSupervisor terminating
web_2  | ** (stop) no process: the process is not alive or there's no process currently associated with the given name, possibly because its application isn't started
web_2  | Last message: {:DOWN, #Reference<0.3736789155.3865575430.239252>, :process, #PID<32733.1386.0>, :noproc}
web_2  | State: %ConsumerSupervisor{args: %Quantum.ExecutorSupervisor.InitOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, execution_broadcaster_reference: {:via, :swarm, QuantumSwarm.Scheduler.ExecutionBroadcaster}, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}, children: %{}, max_restarts: 3, max_seconds: 5, mod: Quantum.ExecutorSupervisor, name: QuantumSwarm.Scheduler.ExecutorSupervisor, producers: %{}, restarting: 0, restarts: [], strategy: :one_for_one, template: {Quantum.Executor, {Quantum.Executor, :start_link, [%Quantum.Executor.StartOpts{cluster_task_supervisor_registry_reference: QuantumSwarm.Scheduler.ClusterTaskSupervisorRegistry, debug_logging: true, task_registry_reference: {:via, :swarm, QuantumSwarm.Scheduler.TaskRegistry}, task_supervisor_reference: QuantumSwarm.Scheduler.Task.Supervisor}]}, :temporary, 5000, :worker, [Quantum.Executor]}}
web_2  | 23:40:53.080 [debug] [swarm on quantum_swarm_umbrella@172.23.0.3] [tracker:handle_replica_event] replica event: untrack #PID<32733.1386.0>
web_2  | 23:40:53.080 [debug] [:"quantum_swarm_umbrella@172.23.0.3"][Elixir.Quantum.ExecutionBroadcaster] Unknown last execution time, using now
web_1  | 23:40:53.069 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] #PID<0.1385.0> belongs on quantum_swarm_umbrella@172.23.0.4
web_1  | 23:40:53.069 [info] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.JobBroadcaster] Handing of state to other cluster node
web_1  | 23:40:53.069 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] QuantumSwarm.Scheduler.JobBroadcaster has requested to be resumed
web_1  | 23:40:53.074 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] sending handoff for QuantumSwarm.Scheduler.JobBroadcaster to quantum_swarm_umbrella@172.23.0.4
web_1  | 23:40:53.074 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] #PID<0.1386.0> belongs on quantum_swarm_umbrella@172.23.0.3
web_1  | 23:40:53.074 [info] [:"quantum_swarm_umbrella@172.23.0.2"][Elixir.Quantum.ExecutionBroadcaster] Handing of state to other cluster node
web_1  | 23:40:53.074 [info] GenStage consumer QuantumSwarm.Scheduler.ExecutionBroadcaster is stopping after receiving cancel from producer #PID<0.1385.0> with reason: :shutdown
web_1  |
web_1  | 23:40:53.074 [info] GenStage consumer QuantumSwarm.Scheduler.ExecutorSupervisor is stopping after receiving cancel from producer #PID<0.1386.0> with reason: :shutdown
web_1  |
web_1  | 23:40:53.074 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] QuantumSwarm.Scheduler.ExecutionBroadcaster has requested to be resumed
web_1  | 23:40:53.079 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] sending handoff for QuantumSwarm.Scheduler.ExecutionBroadcaster to quantum_swarm_umbrella@172.23.0.3
web_1  | 23:40:53.079 [info] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] topology change complete
web_1  | 23:40:53.080 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_replica_event] replica event: untrack #PID<0.1386.0>
web_1  | 23:40:53.083 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_monitor] :"Elixir.#Reference<0.1346946828.911998984.32651>" is down: :shutdown
web_1  | 23:40:53.085 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_call] registering QuantumSwarm.Scheduler.TaskRegistry as process started by Elixir.Quantum.TaskRegistry.start_link/1 with args [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}]
web_1  | 23:40:53.085 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:do_track] starting QuantumSwarm.Scheduler.TaskRegistry on remote node quantum_swarm_umbrella@172.23.0.4
web_3  | 23:40:54.371 [info] [swarm on quantum_swarm_umbrella@172.23.0.4] [tracker:cluster_wait] joining cluster..
web_3  | 23:40:54.371 [info] [swarm on quantum_swarm_umbrella@172.23.0.4] [tracker:cluster_wait] found connected nodes: [:"quantum_swarm_umbrella@172.23.0.2", :"quantum_swarm_umbrella@172.23.0.3"]
web_3  | 23:40:54.371 [info] [swarm on quantum_swarm_umbrella@172.23.0.4] [tracker:cluster_wait] selected sync node: quantum_swarm_umbrella@172.23.0.3
web_3  | 23:41:04.382 [info] Application quantum_swarm exited: QuantumSwarm.Application.start(:normal, []) returned an error: shutdown: failed to start child: QuantumSwarm.Scheduler
web_3  |     ** (EXIT) exited in: :gen_statem.call(Swarm.Tracker, {:track, QuantumSwarm.Scheduler.TaskRegistry, %{mfa: {Quantum.TaskRegistry, :start_link, [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}]}}}, 15000)
web_3  |         ** (EXIT) time out
web_1  | 23:41:04.386 [error] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:start_pid_remotely] ** (exit) exited in: :gen_statem.call({Swarm.Tracker, :"quantum_swarm_umbrella@172.23.0.4"}, {:track, QuantumSwarm.Scheduler.TaskRegistry, %{mfa: {Quantum.TaskRegistry, :start_link, [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}]}}}, :infinity)
web_1  |     ** (EXIT) shutdown
web_1  |     (stdlib) gen.erl:177: :gen.do_call/4
web_1  |     (stdlib) gen_statem.erl:598: :gen_statem.call_dirty/4
web_1  |     (swarm) lib/swarm/tracker/tracker.ex:1115: Swarm.Tracker.start_pid_remotely/6
web_1  |     (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
web_1  |     (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
web_1  |
web_1  | 23:41:04.386 [warn] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:start_pid_remotely] failed to start QuantumSwarm.Scheduler.TaskRegistry on quantum_swarm_umbrella@172.23.0.4: {:shutdown, {:gen_statem, :call, [{Swarm.Tracker, :"quantum_swarm_umbrella@172.23.0.4"}, {:track, QuantumSwarm.Scheduler.TaskRegistry, %{mfa: {Quantum.TaskRegistry, :start_link, [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}]}}}, :infinity]}}
web_1  | 23:41:04.387 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_call] registering QuantumSwarm.Scheduler.JobBroadcaster as process started by Elixir.Quantum.JobBroadcaster.start_link/1 with args [%Quantum.JobBroadcaster.StartOpts{debug_logging: true, jobs: [%Quantum.Job{name: #Reference<0.1346946828.911998979.32608>, overlap: true, run_strategy: %Quantum.RunStrategy.Random{nodes: :cluster}, schedule: ~e[*/3 * * * * * *]e, state: :active, task: {QuantumSwarm.Pinger, :ping, []}, timezone: :utc}], name: QuantumSwarm.Scheduler.JobBroadcaster, scheduler: QuantumSwarm.Scheduler, storage: Quantum.Storage.Noop}]
web_1  | 23:41:04.387 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:do_track] starting QuantumSwarm.Scheduler.JobBroadcaster on remote node quantum_swarm_umbrella@172.23.0.4
web_1  | 23:41:04.388 [warn] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:start_pid_remotely] remote tracker on quantum_swarm_umbrella@172.23.0.4 went down during registration, retrying operation..
      ...message repeated many times...
web_1  | 23:41:05.418 [warn] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:start_pid_remotely] remote tracker on quantum_swarm_umbrella@172.23.0.4 went down during registration, retrying operation..
web_1  | 23:41:05.418 [warn] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:start_pid_remotely] failed to start QuantumSwarm.Scheduler.JobBroadcaster on quantum_swarm_umbrella@172.23.0.4: nodedown, retrying operation..
web_1  | 23:41:05.419 [info] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:nodedown] nodedown quantum_swarm_umbrella@172.23.0.4
web_1  | 23:41:05.419 [debug] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] topology change (nodedown for quantum_swarm_umbrella@172.23.0.4)
web_1  | 23:41:05.419 [info] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:handle_topology_change] topology change complete
web_3  | {"Kernel pid terminated",application_controller,"{application_start_failure,quantum_swarm,{{shutdown,{failed_to_start_child,'Elixir.QuantumSwarm.Scheduler',{timeout,{gen_statem,call,['Elixir.Swarm.Tracker',{track,'Elixir.QuantumSwarm.Scheduler.TaskRegistry',#{mfa => {'Elixir.Quantum.TaskRegistry',start_link,[#{'__struct__' => 'Elixir.Quantum.TaskRegistry.StartOpts',name => 'Elixir.QuantumSwarm.Scheduler.TaskRegistry'}]}}},15000]}}}},{'Elixir.QuantumSwarm.Application',start,[normal,[]]}}}"}
web_3  | Kernel pid terminated (application_controller) ({application_start_failure,quantum_swarm,{{shutdown,{failed_to_start_child,'Elixir.QuantumSwarm.Scheduler',{timeout,{gen_statem,call,['Elixir.Swarm.Trac
web_3  |
web_3  | Crash dump is being written to: erl_crash.dump...done
quantum_swarm_umbrella_web_3 exited with code 1
maennchen commented 6 years ago

@doughsay I believe the error comes from swarm this time (the timeout). The QuantumSwarm.Scheduler.ExecutionBroadcaster stopping is not really a problem, since it is started with restart: :permanent and will therefore be restarted and reconnect. This is also expected when the gen_stage consumers migrate to another node.
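
As a rough, self-contained illustration of that restart behaviour (a toy worker, not quantum's actual child spec):

```elixir
defmodule Demo.Worker do
  use GenServer

  def start_link(arg), do: GenServer.start_link(__MODULE__, arg)

  @impl true
  def init(arg), do: {:ok, arg}
end

# A :permanent child is restarted by its supervisor every time it exits,
# so a GenStage consumer that lost its producer comes back and can re-subscribe.
children = [
  %{id: Demo.Worker, start: {Demo.Worker, :start_link, [:ok]}, restart: :permanent}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)
```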

This part, however, should not happen:

web_3  | 23:41:04.382 [info] Application quantum_swarm exited: QuantumSwarm.Application.start(:normal, []) returned an error: shutdown: failed to start child: QuantumSwarm.Scheduler
web_3  |     ** (EXIT) exited in: :gen_statem.call(Swarm.Tracker, {:track, QuantumSwarm.Scheduler.TaskRegistry, %{mfa: {Quantum.TaskRegistry, :start_link, [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}]}}}, 15000)
web_3  |         ** (EXIT) time out
web_1  | 23:41:04.386 [error] [swarm on quantum_swarm_umbrella@172.23.0.2] [tracker:start_pid_remotely] ** (exit) exited in: :gen_statem.call({Swarm.Tracker, :"quantum_swarm_umbrella@172.23.0.4"}, {:track, QuantumSwarm.Scheduler.TaskRegistry, %{mfa: {Quantum.TaskRegistry, :start_link, [%Quantum.TaskRegistry.StartOpts{name: QuantumSwarm.Scheduler.TaskRegistry}]}}}, :infinity)
web_1  |     ** (EXIT) shutdown
web_1  |     (stdlib) gen.erl:177: :gen.do_call/4
web_1  |     (stdlib) gen_statem.erl:598: :gen_statem.call_dirty/4
web_1  |     (swarm) lib/swarm/tracker/tracker.ex:1115: Swarm.Tracker.start_pid_remotely/6
web_1  |     (elixir) lib/task/supervised.ex:89: Task.Supervised.do_apply/2
web_1  |     (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
maennchen commented 6 years ago

I will go ahead and get the PR merged and also open an issue with swarm.

maennchen commented 6 years ago

The latest issue with swarm is tracked further in #374; the rest will be released with #373.