Over the last few months, we've had a single logger worker that records every request we receive and logs it to a single, massive `greenchecks` table.
As we've seen a more than tenfold uptick in traffic, this logger worker has struggled to keep up: the backlog has grown faster than the logger can work through it.
## How it works at present
Because of a memory leak in PHP, we have some code (sketched after this list) that does the following:
- keep a record of the number of records to write to the database in one batch
- write that batch to the database
- count the number of batch writes the logger has performed
- `sleep()` and `die()` after a set number of writes, to stop the memory leak from bringing down the machine
- allow supervisor to spin up a new logging worker to resume the work
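Here's a minimal sketch of that worker loop. The constant names, the values, and the `fetchPendingLogs` / `writeBatchToGreenchecks` helpers are hypothetical stand-ins for illustration, not the actual implementation:

```php
<?php
// Hypothetical stand-ins for the real queue reader and batch writer.
function fetchPendingLogs(int $limit): array { /* read from the backlog */ return []; }
function writeBatchToGreenchecks(array $batch): void { /* one multi-row INSERT */ }

// Hypothetical tuning constants; the real names and values live in the worker.
const BATCH_SIZE    = 500; // records per batch write
const MAX_BATCHES   = 100; // batch writes before this worker exits
const SLEEP_SECONDS = 5;   // pause before dying, so supervisor restarts us

$batchesWritten = 0;

while ($batchesWritten < MAX_BATCHES) {
    // Pull up to BATCH_SIZE queued log entries from the backlog.
    $batch = fetchPendingLogs(BATCH_SIZE);

    if (count($batch) === 0) {
        usleep(100000); // nothing queued yet; wait briefly and retry
        continue;
    }

    // Write the whole batch to the greenchecks table in one query.
    writeBatchToGreenchecks($batch);
    $batchesWritten++;
}

// sleep() then die(), so the leaked memory is released and supervisor
// spins up a fresh worker to resume the work.
sleep(SLEEP_SECONDS);
die();
```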
## What this PR changes
This PR changes two things (see the sketch below):

- reduces the length of the `sleep()` the logger takes before it dies, shortening the gap before supervisor spins up a new one
- increases the number of batch writes the logger can perform before it sleeps and dies
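In terms of the hypothetical constants from the sketch above, the tuning looks roughly like this (values illustrative, not the ones in the diff):

```php
// Before => after: restart sooner, and do more work per worker lifetime.
const SLEEP_SECONDS = 1;   // was 5: supervisor restarts the worker sooner
const MAX_BATCHES   = 500; // was 100: more batch writes before sleep()/die()
```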
Together, these should increase the logger's maximum throughput, in terms of records written per unit of time.