Closed · sunng87 closed this 1 week ago
The recent changes enhance the object-store module by introducing the `extract_parent_path` function, which extracts the parent path from paths used in object storage operations. Additionally, Prometheus metrics are refined to include path-specific labels, providing more detailed insight into operations on a per-path basis. These changes improve path handling and monitoring granularity inside the module.
| File Path | Change Summary |
|---|---|
| src/object-store/src/util.rs | Added `extract_parent_path` function with tests and refined `normalize_path` function with additional comments. |
| src/object-store/src/layers/prometheus.rs | Enhanced Prometheus metrics tracking by adding "path" as a label value in various functions and methods. |
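The helper itself isn't shown in this thread. A minimal sketch of what such a parent-path extractor might look like; the name matches the summary above, but the body is an assumption, not the real implementation in `src/object-store/src/util.rs`:

```rust
/// Hypothetical sketch: extract the parent path of an object key.
/// The actual function in util.rs may differ in signature and edge cases.
fn extract_parent_path(path: &str) -> &str {
    // Keep everything up to and including the last '/';
    // a key with no '/' has no parent component.
    match path.rfind('/') {
        Some(idx) => &path[..=idx],
        None => "",
    }
}

fn main() {
    assert_eq!(extract_parent_path("data/region/1.parquet"), "data/region/");
    assert_eq!(extract_parent_path("toplevel.txt"), "");
    println!("ok");
}
```

Labeling metrics with the parent path, rather than the full object key, keeps the label cardinality bounded.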
In the forest of code so fine,
Paths converge, metrics align.
Parent paths we now unveil,
Monitoring each with detail.
Prometheus smiles, paths no longer coy,
Every change a tune, coding's joy 🎶.
Attention: Patch coverage is 49.20635% with 64 lines in your changes missing coverage. Please review. Project coverage is 84.67%. Comparing base (b5c6c72) to head (7e98924).
The official `PrometheusLayer` already supports this; I think we should switch to it: https://docs.rs/opendal/latest/opendal/layers/struct.PrometheusLayer.html#method.enable_path_label
According to the patch comment, we forked it to avoid a panic with metrics registration: https://github.com/GreptimeTeam/greptimedb/pull/2861. I checked upstream, and it doesn't seem to have been addressed since our fork. Also, our solution, which uses `lazy_static`, is not suitable for upstreaming because that approach doesn't fit a library.
The layer itself can be cloned. So we could use `lazy_static` to initialize a global `PrometheusLayer` that uses the default registry, then clone that global layer and reuse it.
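The clone-a-global-layer idea can be sketched as follows, using `std::sync::OnceLock` in the role `lazy_static` plays in the discussion, and a stand-in `MetricsLayer` type rather than the real opendal `PrometheusLayer` API:

```rust
use std::sync::OnceLock;

// Stand-in for opendal's PrometheusLayer: the real layer registers its
// metrics when constructed and is cheap to clone afterwards.
#[derive(Clone)]
struct MetricsLayer {
    registered_as: &'static str,
}

// Global layer initialized exactly once, then cloned for every
// Operator that needs it, so registration happens only one time.
static GLOBAL_LAYER: OnceLock<MetricsLayer> = OnceLock::new();

fn global_layer() -> MetricsLayer {
    GLOBAL_LAYER
        .get_or_init(|| MetricsLayer { registered_as: "opendal_requests" })
        .clone()
}

fn main() {
    let a = global_layer();
    let b = global_layer();
    // Both clones share the single registration.
    assert_eq!(a.registered_as, b.registered_as);
    println!("ok");
}
```

As the next comment points out, this pattern only helps if cloning the layer actually reuses the registered metrics, which turns out not to be the case upstream.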
Yes, I realized that the layer creates a `PrometheusMetrics` in each call to the `Layer::layer()` implementation. `PrometheusMetrics` always registers its metrics to the provided registry, but the registry doesn't allow registering the same metric multiple times. So we can't create multiple `Operator`s with the same Prometheus layer: https://github.com/apache/opendal/blob/174bda53f79123cd114d2409189423a0a4cf6bf3/core/src/layers/prometheus.rs#L193-L210
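The registration conflict can be illustrated with a minimal stand-in registry; the real `prometheus::Registry::register` similarly returns an error (or, in the unforked layer's code path, panics) when a collector with the same name is registered a second time:

```rust
use std::collections::HashSet;

// Minimal stand-in for a Prometheus registry: real registries reject
// a second collector with the same fully-qualified name.
struct Registry {
    names: HashSet<String>,
}

impl Registry {
    fn new() -> Self {
        Registry { names: HashSet::new() }
    }

    // Err on duplicate registration, mirroring the duplicate-metric
    // error the thread describes.
    fn register(&mut self, name: &str) -> Result<(), String> {
        if !self.names.insert(name.to_string()) {
            return Err(format!("metric `{name}` already registered"));
        }
        Ok(())
    }
}

fn main() {
    let mut registry = Registry::new();
    // First Operator built with the layer registers fine.
    assert!(registry.register("opendal_requests_total").is_ok());
    // A second Operator with the same layer hits the conflict.
    assert!(registry.register("opendal_requests_total").is_err());
    println!("ok");
}
```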
I created an issue for this. https://github.com/apache/opendal/issues/4854
Maybe @Xuanwo can give us some suggestions.
Thanks! I'm also thinking of allowing users to set metrics or reuse them in some way. I will give it a look.
Actually, I found the Prometheus library has a few runtime panics which I think could be avoided by design. For example, if you provide labels that don't match the declaration, it ends up with a runtime panic, which is very dangerous in some corner cases.
Would you like to submit an issue to upstream and link back here?
> Actually, I found the Prometheus library has a few runtime panics which I think could be avoided by design. For example, if you provide labels that don't match the declaration, it ends up with a runtime panic, which is very dangerous in some corner cases.

It provides a method `get_metric_with_label_values()` that returns an error instead: https://docs.rs/prometheus/latest/prometheus/core/struct.MetricVec.html#method.get_metric_with_label_values
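The contrast between the two lookup styles can be sketched with a hypothetical `CounterVec` stand-in (not the real `prometheus` crate type): it returns a `Result` on a label-count mismatch, mirroring `get_metric_with_label_values`, instead of panicking the way `with_label_values` does:

```rust
// Hypothetical stand-in for a labeled metric family with a fixed
// number of declared label dimensions.
struct CounterVec {
    declared_labels: usize,
}

impl CounterVec {
    // Error-returning lookup, in the style of
    // prometheus' get_metric_with_label_values.
    fn get_metric_with_label_values(&self, values: &[&str]) -> Result<(), String> {
        if values.len() != self.declared_labels {
            return Err(format!(
                "expected {} label values, got {}",
                self.declared_labels,
                values.len()
            ));
        }
        Ok(())
    }
}

fn main() {
    // Declared with two labels, e.g. ("operation", "path").
    let requests = CounterVec { declared_labels: 2 };
    assert!(requests.get_metric_with_label_values(&["read", "data/"]).is_ok());
    // A mismatched label count surfaces as an error, not a panic.
    assert!(requests.get_metric_with_label_values(&["read"]).is_err());
    println!("ok");
}
```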
Thanks! I'm also thinking allow users to set metrics or reuse them in someway. I will give it look.
@Xuanwo I can submit a patch for a workaround by adding a method to set the metrics:

```rust
impl PrometheusLayer {
    fn metrics(mut self, metrics: Arc<PrometheusMetrics>) -> Self {
        self.metrics = Some(metrics);
        self
    }
}
```

But this may not be elegant enough.
The CI should be fixed in the main branch 🥲
I hereby agree to the terms of the GreptimeDB CLA.
Refer to a related PR or issue link (optional)
What's changed and what's your intention?
This patch adds a path label to object-store-related Prometheus metrics, so that we can observe which paths receive the most writes/reads.
Checklist
Summary by CodeRabbit