One of the common debugging issues with Celery tasks is that they appear to "leak memory." Usually this has nothing to do with Celery and everything to do with the code written in the task itself, but it's very difficult to pinpoint which tasks, under which conditions, actually cause the problem.
For tasks that increase memory usage by more than a threshold amount during the execution of the task, spit out a log message right before returning the result.
- Add a setting (default off) to enable memory logging by default
- Add a per-task configuration for enabling or disabling memory logging, which overrides the setting
- Add a per-task configuration for the memory increase threshold that triggers the logging (setting it to 0 logs everything)
- Add a setting for the default logger configuration for this celery_mem_usage.log logger (one possible shape of these settings is sketched below)
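Purely as an illustration of how these settings might fit together (none of the setting or option names below exist yet, and the per-task part assumes Celery's behaviour of passing extra task-decorator options through as attributes on the task class):

```python
from celery import Celery

app = Celery("example")

# Hypothetical global defaults (names are made up for illustration only):
app.conf.memory_logging = False                 # the "default off" switch
app.conf.memory_logging_threshold = 50 * 2**20  # bytes; 0 would log every task

# Hypothetical per-task overrides; extra decorator options end up as
# attributes on the generated task class, readable via self.<name>.
@app.task(bind=True, memory_logging=True, memory_logging_threshold=0)
def noisy_task(self):
    # The (not yet written) instrumentation would consult
    # self.memory_logging / self.memory_logging_threshold here.
    return "done"
```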
Before calculate_result is called, grab the RAM usage. Then, before we return the result, get RAM usage again. If it's increased by more than the threshold amount, spit out a WARNING message with:
- task id (which allows tracing back to other loggers)
This can all be done with the psutil module. Here is an example of getting the RAM usage of the current process in Python.
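A minimal version of that example (psutil.Process() with no argument refers to the current process, and memory_info().rss is its resident set size in bytes):

```python
import psutil

# RSS (resident set size) of the current process, in bytes.
rss = psutil.Process().memory_info().rss
print(f"Current RAM usage: {rss / 2**20:.1f} MiB")
```

And here is a rough sketch of the check described above, wrapped around a stand-in calculate_result; the threshold constant, the logger wiring, and the task body are placeholders for whatever the settings end up providing:

```python
import logging

import psutil
from celery import Celery

app = Celery("example")
logger = logging.getLogger("celery_mem_usage")

# Placeholder; the real value would come from the settings / per-task config.
MEMORY_LOG_THRESHOLD = 50 * 2**20  # 50 MiB


def calculate_result(n):
    # Stand-in for the real work the task performs.
    return sum(range(n))


@app.task(bind=True)
def some_task(self, n):
    proc = psutil.Process()
    rss_before = proc.memory_info().rss  # RAM usage before calculate_result

    result = calculate_result(n)

    rss_after = proc.memory_info().rss   # RAM usage right before returning
    increase = rss_after - rss_before
    if increase >= MEMORY_LOG_THRESHOLD:  # a threshold of 0 logs everything
        logger.warning(
            "task %s increased RSS by %d bytes (%d -> %d)",
            self.request.id, increase, rss_before, rss_after,
        )
    return result
```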