sabiland opened this issue 1 year ago
I have a similar problem: in my job I start many jobs, and looking at SQL Azure and Application Insights, this statement takes from 2 to 5 seconds to run:
set xact_abort on; set nocount on; declare @jobId bigint;
begin tran;
insert into [HangFire].Job (InvocationData, Arguments, CreatedAt, ExpireAt) values (@invocationData, @arguments, @createdAt, @expireAt);
select @jobId = scope_identity(); select @jobId;
insert into [HangFire].JobParameter (JobId, Name, Value) values (@jobId, @name1, @value1), (@jobId, @name2, @value2);
commit tran;
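For context, each of these statements comes from a single job-creation call, roughly like this (the NotificationJobs class and Send method below are just placeholders, not my real code):

using Hangfire;

public class NotificationJobs
{
    // Placeholder job method; its serialized call ends up in the
    // InvocationData / Arguments columns of the INSERT above.
    public void Send(int userId) { /* ... */ }
}

public static class EnqueueExample
{
    public static void Run()
    {
        // One Enqueue call produces one transaction like the one shown above:
        // an INSERT into [HangFire].Job plus INSERTs into [HangFire].JobParameter.
        BackgroundJob.Enqueue<NotificationJobs>(j => j.Send(42));
    }
}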
This is my startup configuration:
services.AddHangfire(opts => opts.UseConsole()
.UseSqlServerStorage(configuration.GetConnectionString(DEFAULTCONN),
new Hangfire.SqlServer.SqlServerStorageOptions()
{
CommandTimeout = TimeSpan.FromMinutes(5),
CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
QueuePollInterval = TimeSpan.Zero,
UseRecommendedIsolationLevel = true,
DisableGlobalLocks = true
})
);
What are the best options to optimize job creation?
We set up a new empty Hangfire DB, and I've added the recommended initial settings for Hangfire (we did not have them before). Everything works OK for now.
services.AddHangfire(configuration => configuration
.SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
.UseSimpleAssemblyNameTypeSerializer()
.UseRecommendedSerializerSettings()
.UseSqlServerStorage("{connection_string}", new SqlServerStorageOptions
{
CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
QueuePollInterval = TimeSpan.Zero,
UseRecommendedIsolationLevel = true,
DisableGlobalLocks = true
}));
Where did you get these recommended parameters, @sabiland? Do you have a link?
They are on the official Hangfire site ;-) Recommended Hangfire Init settings
How did I miss this? Did you have to erase all of the jobs, or can you just change it on the fly?
We set up a new empty DB and I just 1) added the initial Hangfire settings, 2) changed the connection string to the new DB, and 3) re-ran the app. We didn't need to preserve existing jobs.
That's my problem: I have a lot of running jobs. I'll try it and see what I lose.
thanks
@luizfbicalho did you manage to solve the problem?
I've just upgraded from v1.7.11 to 1.7.34; after that, tasks from Enqueued began to be distributed much more slowly.
I have one point where I enqueue about 50 tasks, and it runs in about 4 minutes, so I changed my code to enqueue one task that enqueues the 50 other tasks.
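Roughly, the fan-out looks like this (the FanOutJobs class and its methods are just placeholders, not my real code):

using Hangfire;

public class FanOutJobs
{
    // Parent job: the caller enqueues only this one job, and the other
    // ~50 enqueues happen here, inside a worker, off the hot path.
    public void EnqueueChildren(int[] itemIds)
    {
        foreach (var id in itemIds)
        {
            BackgroundJob.Enqueue<FanOutJobs>(j => j.ProcessItem(id));
        }
    }

    // Placeholder child job.
    public void ProcessItem(int itemId) { /* ... */ }
}

// In the calling code, instead of ~50 direct Enqueue calls:
// BackgroundJob.Enqueue<FanOutJobs>(j => j.EnqueueChildren(itemIds));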
It seems to me I've managed to overcome the issue with slow dequeue. The issue appeared in version 1.7.28 with long polling, if the polling interval is less than 1 second. I didn't want to roll back from 1.7.34 to 1.7.27, so I changed QueuePollInterval to 1 second, and the issue was resolved for my environment.
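For anyone else hitting this, the change is just the poll interval in the storage options; a minimal sketch (the other options are placeholders copied from the snippets above, not part of the fix):

services.AddHangfire(config => config
    .UseSqlServerStorage("{connection_string}", new SqlServerStorageOptions
    {
        // Intervals below 1 second trigger the long-polling path added in 1.7.28,
        // which caused the slow dequeue; 1 second avoids it.
        QueuePollInterval = TimeSpan.FromSeconds(1),
        SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
        UseRecommendedIsolationLevel = true,
        DisableGlobalLocks = true
    }));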
I'm not sure if it's my case, because it's slow to enqueue, on a SQL Azure with a high DTU configuration, so I don't see what's going on.
Anyway I'm gonna test it
Thanks @ivanovevgeny
I have a scenario where in the evenings I need up to 30-40/sec BackgroundJob.Enqueue calls. Is this maybe too much for Hangfire to handle? Our DB guys noticed that there are some blockings/deadlocks on the DB when this happens (in the evenings, when I do so many BackgroundJob.Enqueue calls). I am using the default Hangfire init parameters in startup.cs. Could I optimize the init somehow?

EDIT: We use MSSQL. Could the problem be that the DB is too overloaded with other things? This production DB server runs ca. 150 databases.

EDIT2: Our DB guys analyzed this situation. They found periodic bursts of Hangfire UPDATE SQL statements (10,000 - 20,000 updates), and the DB blocks them on purpose for some time. Is there a reason for so many UPDATE calls to the Job table? The Job table is around 50 GB. They said the problem is probably reindexing. Any suggestions?
They also said the forceseek hint should not be used and could lead to problems.