Brief summary
When using the Prometheus remote write extension for k6, the k6_http_req_failed_rate metric is effectively useless: it doesn't increase or decrease; it jumps to 1 on the first error and stays there.
k6 version
0.47
OS
docker image
Docker version and image (if applicable)
grafana/k6:0.47.0
Steps to reproduce the problem
Config:
K6 script:
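The original config and script were not captured in this report; the sketch below is a hypothetical minimal equivalent, assuming a simple constant-VU test against the target service, with the remote write output configured through the `K6_PROMETHEUS_RW_SERVER_URL` environment variable (the URL and service name are placeholders, not the values actually used).

```javascript
// Hypothetical minimal reproducer, not the original script.
// Run with the Prometheus remote write output, e.g.:
//   K6_PROMETHEUS_RW_SERVER_URL=http://prometheus:9090/api/v1/write \
//   k6 run -o experimental-prometheus-rw script.js
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 50,
  duration: '10m',
};

export default function () {
  // While pods behind the target service are being deleted, some of these
  // requests fail, which feeds the http_req_failed rate metric.
  http.get('http://target-service.default.svc.cluster.local/');
  sleep(1);
}
```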
k6 output
During the active phase of the test, I deleted 30% of the pods backing the target service. This caused request errors as expected, but those errors are not reflected correctly in the metric.
Expected behaviour
Because this is a rate metric, it should show a brief spike and then fall back to 0. An alternative fix would be to expose a k6_http_req_failed_total counter, which Prometheus can then turn into a rate.
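For example, with a counter like the proposed k6_http_req_failed_total, a standard query such as `rate(k6_http_req_failed_total[1m])` could compute the failure rate over a window, which would spike while pods were being deleted and fall back to 0 afterwards.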
Actual behaviour
In the attached image, the metric jumped to 1 and stayed there. This isn't correct: it should have dropped back to zero after the system adjusted.