Hello!

I noticed that the `max_dead_jobs` configuration is undocumented, which can cause problems for anyone relying on the DeadSet queue to evaluate errors on their workers: the actual limit is 100, which can easily be reached depending on the system size, causing dead jobs to be silently discarded.

Also related to #170, as this config is read at compile time and cannot be overridden at runtime.
From `Verk.DeadSet`:

```elixir
@max_dead_jobs Confex.get_env(:verk, :max_dead_jobs, 100)

# ...

@doc """
Optionally a redis connection can be specified
"""
@spec add(%Job{}, integer, GenServer.server) :: :ok | {:error, Redix.Error.t}
def add(job, timestamp, redis \\ Verk.Redis) do
  case Redix.pipeline(redis, [["ZADD", @dead_key, timestamp, Job.encode!(job)],
                              ["ZREMRANGEBYSCORE", @dead_key, "-inf", timestamp - @timeout],
                              ["ZREMRANGEBYRANK", @dead_key, 0, -@max_dead_jobs]]) do
    {:ok, _} -> :ok
    {:error, error} -> {:error, error}
  end
end
```
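One possible direction (a sketch, not Verk's actual code) would be to resolve the limit at call time instead of baking it into a module attribute, so a release can override `:max_dead_jobs` without recompiling. This variant assumes the same `@dead_key`/`@timeout` attributes and the `Confex.get_env/3` call already used above:

```elixir
# Hypothetical runtime-configurable variant: the module attribute is replaced
# by a lookup performed on every call, so the value set in the application
# environment (or via Confex's {:system, ...} tuples) takes effect immediately.
def add(job, timestamp, redis \\ Verk.Redis) do
  max_dead_jobs = Confex.get_env(:verk, :max_dead_jobs, 100)

  case Redix.pipeline(redis, [["ZADD", @dead_key, timestamp, Job.encode!(job)],
                              ["ZREMRANGEBYSCORE", @dead_key, "-inf", timestamp - @timeout],
                              ["ZREMRANGEBYRANK", @dead_key, 0, -max_dead_jobs]]) do
    {:ok, _} -> :ok
    {:error, error} -> {:error, error}
  end
end
```

The per-call lookup is a negligible cost next to the Redis round trip, and it would also make the limit testable without recompilation.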