status-im / nim-metrics

Nim metrics client library supporting the Prometheus monitoring toolkit, StatsD and Carbon

Fix regressions introduced by #39 and don't expose broken metrics by default #43

Open arnetheduck opened 2 years ago

arnetheduck commented 2 years ago

https://github.com/status-im/nim-metrics/pull/39 introduces significant semantic regressions compared to the earlier implementation.

In the new implementation, by default:

https://github.com/status-im/nim-metrics/pull/41 adds additional workarounds; all of the above issues now apply to per-thread metrics as well.

Prometheus uses a pull model for metrics - it means that when metrics are fetched via HTTP, they should be updated at that moment. In that model, every metric that can reasonably be loaded on demand at the time of the poll should be loaded then, and this includes memory statistics. It follows naturally that the previous implementation, except for the Nim GC metrics, was better in all respects.

Because the system GC metrics come with several caveats and costs that make them unreliable unless used correctly, a better approach would be:

stefantalpalaru commented 2 years ago

Prometheus uses a pull model for metrics - it means that when metrics are fetched via HTTP, they should be updated at that moment

That's a misunderstanding. A Prometheus "pull" is just a snapshot, not an update+snapshot.

"When Prometheus scrapes your instance's HTTP endpoint, the client library sends the current state of all tracked metrics to the server." - https://prometheus.io/docs/instrumenting/clientlibs/

arnetheduck commented 2 years ago

That's a misunderstanding. A Prometheus "pull" is just a snapshot, not an update+snapshot.

Yes, this can be confusing if you come from a background of push metrics: when pulling metrics, you pull the latest value from the place where that value is stored. When the value is pulled directly from its source, there is no separation between update and snapshot - it is one operation.

An example can help with understanding how to implement, for instance, process statistics - here you can see how process metrics are collected as part of the update; there is no separate "push loop" that tries to push these metrics into a metric variable at random times:

https://github.com/prometheus/node_exporter/blob/master/collector/cpu_linux.go#L149

Here's how it's done in Java:

https://github.com/prometheus/client_java/blob/master/simpleclient_hotspot/src/main/java/io/prometheus/client/hotspot/StandardExports.java#L145

Python does a similar thing - to understand this, it helps to realize that "current state" involves reading that state from the source.
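
To make "reading the state from the source" concrete, here is a minimal sketch of the pattern, using Python's prometheus_client purely for illustration (the collector name and the /proc parsing are assumptions, not nim-metrics or exporter code). Nothing is stored between scrapes; every request to /metrics re-reads the value from its source:

```python
from prometheus_client import start_http_server
from prometheus_client.core import GaugeMetricFamily, REGISTRY
import time

class ProcessMemoryCollector:
    """Custom collector: the value is read from its source at scrape time."""
    def collect(self):
        rss_kib = 0
        # Linux-specific source of truth for the process's resident memory.
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    rss_kib = int(line.split()[1])
        yield GaugeMetricFamily(
            "process_resident_memory_bytes",
            "Resident memory size in bytes, read at collection time",
            value=rss_kib * 1024,
        )

REGISTRY.register(ProcessMemoryCollector())
start_http_server(8000)  # every GET of /metrics calls collect() above
while True:
    time.sleep(1)
```

There is no background loop and no stored gauge that can go stale: the scrape and the read from the source are one and the same operation.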

In eth2 clients, to give a more concrete example, the same situation applies: metrics that have a source of truth available at collection time would do well not to duplicate that information in separate variables - to better understand this, here's an example from lighthouse:

https://github.com/sigp/lighthouse/blob/b5921e4248fae19e874da595db55a124f22a538a/beacon_node/beacon_chain/src/metrics.rs#L925

Putting a metric in an instance variable and "manually" updating it by instrumenting the code with "update" calls is done when the underlying value would otherwise be lost - for example because it is part of a computation whose results are discarded / not held in memory. This is convenient when you can easily instrument the code and colocate the update of the metric with the computation.
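
For contrast, here is roughly what that "instrument the code" case looks like, again sketched with Python's prometheus_client and a hypothetical process_block function (the metric name and the computation are made up for illustration) - the counter is updated right where the work happens, because the intermediate state is discarded afterwards:

```python
from prometheus_client import Counter

# Hypothetical metric, for illustration only - not part of nim-metrics.
blocks_processed = Counter(
    "blocks_processed",
    "Number of blocks processed",
)

def process_block(block: bytes) -> int:
    # The intermediate state of this computation is not kept anywhere, so the
    # only way to expose a count is to update the metric here, colocated
    # with the work itself.
    checksum = sum(block) % 256
    blocks_processed.inc()
    return checksum
```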

In the case of process metrics, you cannot colocate the two - the only options you have are to collect during the scrape, or to implement hacks / workarounds like those in #39.

stefantalpalaru commented 2 years ago

You can't argue for updating all metrics on demand by looking at the special case of system metrics.

arnetheduck commented 2 years ago

No, I'm arguing that this is the way to implement a specific class of metrics, namely those for which you cannot instrument the code, in order to get timely and correct updates - these are the metrics that you read from a source, and I've named a few examples to help with understanding the context. This indeed does not mean that all metrics should work this way, and I write that too: some metrics don't have a state to read from, and for those it is indeed valuable to have a convenient storage location - the key point is that you can instrument the code that creates the metrics.

In the case of process metrics, you can't - this issue is about fixing that regression in how nim-metrics implements process metrics in particular, but the concept applies generally to any application that exposes metrics via Prometheus: pulling metrics at the time of the read is a standard feature that is useful.

stefantalpalaru commented 2 years ago

pulling metrics at the time of the read is a standard feature

I'd call it an implementation detail that should not be relied upon. The spec is quite clear about what is guaranteed: https://prometheus.io/docs/instrumenting/writing_clientlibs/

arnetheduck commented 2 years ago

The spec is quite clear about what is guaranteed: https://prometheus.io/docs/instrumenting/writing_clientlibs/

Yes, here's a relevant quote from the document:

If obtaining a necessary value is problematic or even impossible with the used language or runtime, client libraries SHOULD prefer leaving out the corresponding metric over exporting bogus, inaccurate, or special values (like NaN).

#39 exposes inaccurate metrics at the time of the scrape.

You will also be helped by reading the part of the documentation that is relevant to exporting process metrics in general:

https://prometheus.io/docs/instrumenting/writing_exporters/#collectors

This is where the above examples come from - in particular, they specifically point out that "direct instrumentation", i.e. what #39 tries to do, should never be done.

stefantalpalaru commented 2 years ago

they specifically point out that "direct instrumentation", i.e. what https://github.com/status-im/nim-metrics/pull/39 tries to do, should never be done

Client libraries are not exporters. We are writing the former here, not the latter, so that different spec does not apply.

"libraries and servers which help in exporting existing metrics from third-party systems as Prometheus metrics" - https://prometheus.io/docs/instrumenting/exporters/

arnetheduck commented 2 years ago

third-party systems

Well, you are doing exactly that: you are exporting the metrics of the process (those starting with process_) and of Nim itself (the GC metrics) - this is not Nim exporting metrics about itself; it is nim-metrics exposing the metrics of another system, one where you cannot directly instrument the code.