I just stumbled upon a stuck librelambda run in one of our prod sites, outputting pages of text like this:
After digging into the conversion candidates in the file_conversion table for that file ID, there were 1211 rows.
I suspect this is the cause of a lot of the DB load originating from librelambda, as the first step in get_conversions_for_file joins all 1200 of those rows.
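For reference, this is roughly how I pulled the numbers. It's a quick CLI sketch against Moodle's DML API (not the plugin's own code), dropped into dirroot; 12345 is just a placeholder for the real fileid from the screenshot:

<?php
// Quick sketch: count file_conversion rows hanging off one source file.
// 12345 is a placeholder for the real fileid from the screenshot.
define('CLI_SCRIPT', true);
require(__DIR__ . '/config.php'); // assumes this lives in dirroot

$fileid = 12345;

// Total conversion records pointing at this source file.
$total = $DB->count_records('file_conversion', ['sourcefileid' => $fileid]);
mtrace("file_conversion rows for file {$fileid}: {$total}");

// Break them down by status to see how many are still pending / in progress.
$sql = "SELECT status, COUNT(1) AS cnt
          FROM {file_conversion}
         WHERE sourcefileid = :fileid
      GROUP BY status";
foreach ($DB->get_records_sql($sql, ['fileid' => $fileid]) as $row) {
    mtrace("  status {$row->status}: {$row->cnt}");
}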
To make it worse, cron then has to make 25k web requests to poll these conversions, and is probably getting throttled at some point.
These conversions are clearly cooked: the oldest has been active for 9 days for the fileid from the screenshot. They should be cleaned up.
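Something along these lines would clear them out. This is only a sketch of what I have in mind, not tested; the one-week cutoff and the loose match on the converter column are my own assumptions:

<?php
// Rough cleanup sketch (untested): drop conversion records that have sat in
// a pending or in-progress state for more than a week. The one-week cutoff
// and the loose match on the converter column are assumptions.
define('CLI_SCRIPT', true);
require(__DIR__ . '/config.php'); // assumes this lives in dirroot

$select = $DB->sql_like('converter', ':converter') . "
           AND status IN (:pending, :inprogress)
           AND timemodified < :cutoff";
$params = [
    'converter'  => '%librelambda%',
    'pending'    => \core_files\conversion::STATUS_PENDING,
    'inprogress' => \core_files\conversion::STATUS_IN_PROGRESS,
    'cutoff'     => time() - WEEKSECS,
];

// Report how many rows match before deleting them.
mtrace($DB->count_records_select('file_conversion', $select, $params) . ' stale rows to remove');
$DB->delete_records_select('file_conversion', $select, $params);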