mdawar / rq-exporter

Prometheus metrics exporter for Python RQ (Redis Queue).

rq_ metrics are not collected by the exporter #33

Open ben5556 opened 7 months ago

ben5556 commented 7 months ago

The following metrics are not collected by the exporter:

| Metric | Type | Labels | Description |
| --- | --- | --- | --- |
| `rq_workers` | Gauge | `name`, `queues`, `state` | RQ workers |
| `rq_jobs` | Gauge | `queue`, `status` | RQ jobs by queue and status |
| `rq_workers_success_total` | Counter | `name`, `queues` | Successful job count by worker |
| `rq_workers_failed_total` | Counter | `name`, `queues` | Failed job count by worker |
| `rq_workers_working_time_total` | Counter | `name`, `queues` | Total working time in seconds by worker |

Debug logs of the container:

[2023-12-01 18:33:56] [rq_exporter.collector] [DEBUG]: Collecting the RQ metrics...
[2023-12-01 18:33:56] [rq_exporter.collector] [DEBUG]: RQ metrics collection finished
[2023-12-01 18:33:56] [rq_exporter] [INFO]: Serving the application on 0.0.0.0:9726
[2023-12-01 18:34:00] [rq_exporter.collector] [DEBUG]: Collecting the RQ metrics...
[2023-12-01 18:34:00] [rq_exporter.collector] [DEBUG]: RQ metrics collection finished
[2023-12-01 18:34:04] [rq_exporter.collector] [DEBUG]: Collecting the RQ metrics...
[2023-12-01 18:34:04] [rq_exporter.collector] [DEBUG]: RQ metrics collection finished
[2023-12-01 18:34:14] [rq_exporter.collector] [DEBUG]: Collecting the RQ metrics...
[2023-12-01 18:34:14] [rq_exporter.collector] [DEBUG]: RQ metrics collection finished
[2023-12-01 18:34:15] [rq_exporter.collector] [DEBUG]: Collecting the RQ metrics...
[2023-12-01 18:34:15] [rq_exporter.collector] [DEBUG]: RQ metrics collection finished
[2023-12-01 18:34:24] [rq_exporter.collector] [DEBUG]: Collecting the RQ metrics...
[2023-12-01 18:34:24] [rq_exporter.collector] [DEBUG]: RQ metrics collection finished
[2023-12-01 18:34:30] [rq_exporter.collector] [DEBUG]: Collecting the RQ metrics...
[2023-12-01 18:34:30] [rq_exporter.collector] [DEBUG]: RQ metrics collection finished

Metrics exposed by the exporter

# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 258.0
python_gc_objects_collected_total{generation="1"} 268.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable objects found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 65.0
python_gc_collections_total{generation="1"} 5.0
python_gc_collections_total{generation="2"} 0.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="8",patchlevel="17",version="3.8.17"} 1.0
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 2.58596864e+08
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 3.0138368e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.70145563537e+09
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.19
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 7.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP rq_request_processing_seconds Time spent collecting RQ data
# TYPE rq_request_processing_seconds summary
rq_request_processing_seconds_count 22.0
rq_request_processing_seconds_sum 0.03231144789606333
# HELP rq_request_processing_seconds_created Time spent collecting RQ data
# TYPE rq_request_processing_seconds_created gauge
rq_request_processing_seconds_created 1.7014556361013677e+09
# HELP rq_workers RQ workers
# TYPE rq_workers gauge
# HELP rq_workers_success_total RQ workers success count
# TYPE rq_workers_success_total counter
# HELP rq_workers_failed_total RQ workers fail count
# TYPE rq_workers_failed_total counter
# HELP rq_workers_working_time_total RQ workers spent seconds
# TYPE rq_workers_working_time_total counter
# HELP rq_jobs RQ jobs by state
# TYPE rq_jobs gauge
ben5556 commented 7 months ago

Using the latest Docker image.

mdawar commented 7 months ago

It seems that you don't have any workers running and no jobs enqueued; this is the initial state, when there are no metrics to display.

Take a look at the [docker-compose.yml](https://github.com/mdawar/rq-exporter/blob/master/docker-compose.yml) file for a complete example; you can start an example dev environment using `docker compose up`.
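
For example, here's a minimal sketch of what produces these metrics; the Redis address and the `tasks.add` function are assumptions for illustration:

```python
# Hypothetical example: enqueue a job so the exporter has data to collect.
# Assumes a Redis server on localhost:6379 and a tasks.py module defining
# an add(x, y) function.
from redis import Redis
from rq import Queue

queue = Queue('default', connection=Redis())
queue.enqueue('tasks.add', 1, 2)

# A worker must also be running (e.g. `rq worker default`) for the
# rq_workers and rq_workers_*_total metrics to show up.
```

Once a job is queued, `rq_jobs{queue="default",status="queued"}` should appear on the next scrape.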

ben5556 commented 7 months ago

Thanks. I would expect it to show the metrics, but with a value of 0, if there are no jobs enqueued?

mdawar commented 7 months ago

The Prometheus client automatically exports [a 0 value](https://prometheus.io/docs/practices/instrumentation/#avoid-missing-metrics) for metrics without labels; all the metrics that you referred to have labels and are only exported once the data is available.
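
For illustration, a minimal sketch of this behavior using the `prometheus_client` library directly (the metric names are made up):

```python
# Demonstrates that unlabeled metrics are exported with an initial 0 value,
# while labeled metrics produce no samples until a label combination is used.
from prometheus_client import Counter, generate_latest

unlabeled = Counter('demo_unlabeled', 'Counter without labels')
labeled = Counter('demo_labeled', 'Counter with a queue label', ['queue'])

# demo_unlabeled_total is present with a value of 0.0; demo_labeled has no samples.
print(generate_latest().decode())

# Only after a child is created for a label value does a sample appear.
labeled.labels(queue='default').inc()
print(generate_latest().decode())  # includes demo_labeled_total{queue="default"} 1.0
```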

ben5556 commented 7 months ago

Thanks a lot. I will test this in production and let you know.

pen-pal commented 1 month ago

Were you able to fix it?