open-telemetry / opentelemetry-python

OpenTelemetry Python API and SDK
https://opentelemetry.io
Apache License 2.0

Keeps Exporting the Last Value Set for Metric #2758

Closed: naveenkumarthangaraj closed this issue 2 years ago

naveenkumarthangaraj commented 2 years ago

Hi Team,

I am following the Python metrics example from the link below, but it is exporting the last passed value automatically. It will stop after 8-10 minutes.

Is this a problem in the Python SDK, the collector, or the exporter?

https://github.com/open-telemetry/opentelemetry-python/tree/main/docs/examples/metrics

# Package Version
asgiref==3.5.2
backoff==1.11.1
backports.zoneinfo==0.2.1
certifi==2022.5.18.1
charset-normalizer==2.0.12
Deprecated==1.2.13
Django==4.0.5
djangorestframework==3.13.1
googleapis-common-protos==1.56.2
grpcio==1.46.3
idna==3.3
iteration-utilities==0.11.0
opentelemetry-api==1.12.0rc1
opentelemetry-exporter-otlp==1.12.0rc1
opentelemetry-exporter-otlp-proto-grpc==1.12.0rc1
opentelemetry-exporter-otlp-proto-http==1.12.0rc1
opentelemetry-proto==1.12.0rc1
opentelemetry-sdk==1.12.0rc1
opentelemetry-semantic-conventions==0.31b0
protobuf==3.20.1
pytz==2022.1
requests==2.27.1
six==1.16.0
sqlparse==0.4.2
typing-extensions==4.2.0
urllib3==1.26.9
wrapt==1.14.1

Here is the code:

from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
    OTLPMetricExporter,
)
from opentelemetry.metrics import (
    CallbackOptions,
    Observation,
    get_meter_provider,
    set_meter_provider,
)
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

# Push metrics to the collector over OTLP/gRPC on a periodic schedule.
exporter = OTLPMetricExporter(insecure=True)
reader = PeriodicExportingMetricReader(exporter)
provider = MeterProvider(metric_readers=[reader])
set_meter_provider(provider)

meter = get_meter_provider().get_meter("getting-started", "0.1.2")

# Record a single counter increment and a single histogram measurement.
counter = meter.create_counter("new-sample")
counter.add(30)

histogram = meter.create_histogram("new-histogram")
histogram.record(20.1)
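
(Note: the script exits right after these two calls, so the recorded values are only flushed when the provider shuts down. A minimal sketch of an explicit shutdown at the end of the script, assuming the SDK's MeterProvider.shutdown() method, which should also trigger a final export from the periodic reader:)

# Flush and stop the reader and exporter before the process exits, so the two
# recorded data points are exported rather than relying on shutdown at
# interpreter exit.
provider.shutdown()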

Here is the collector configuration:

receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  otlp:
    endpoint: "otel-collector:4317"
    insecure: true
    sending_queue:
      num_consumers: 4
      queue_size: 100
    retry_on_failure:
      enabled: true
  logging:
    logLevel: debug
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
  prometheus:
    endpoint: "otel-collector:8889"
processors:
  batch:
  memory_limiter:
    limit_mib: 400
    spike_limit_mib: 100
    check_interval: 5s
extensions:
  zpages: {}
  memory_ballast:
    size_mib: 165
service:
  extensions: [zpages, memory_ballast]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp/elastic]
    metrics:
      receivers: [otlp]
      exporters: [logging, prometheus]

Collector logs (it looks like it is automatically serving/receiving the metric to the exporter every few seconds):

2022-06-13T09:43:45.555Z        debug   prometheusexporter@v0.35.0/collector.go:225     metric served: Desc{fqName: "new_sample", help: "", constLabels: {}, variableLabels: []}       {"kind": "exporter", "name": "prometheus"}
2022-06-13T09:43:45.555Z        debug   prometheusexporter@v0.35.0/collector.go:225     metric served: Desc{fqName: "new_histogram", help: "", constLabels: {}, variableLabels: []}    {"kind": "exporter", "name": "prometheus"}
2022-06-13T09:43:48.658Z        debug   memorylimiter/memorylimiter.go:270      Currently used memory.  {"kind": "processor", "name": "memory_limiter", "cur_mem_mib": 10}
2022-06-13T09:43:53.657Z        debug   memorylimiter/memorylimiter.go:270      Currently used memory.  {"kind": "processor", "name": "memory_limiter", "cur_mem_mib": 10}
2022-06-13T09:43:55.555Z        debug   prometheusexporter@v0.35.0/collector.go:213     collect called  {"kind": "exporter", "name": "prometheus"}
2022-06-13T09:43:55.555Z        debug   prometheusexporter@v0.35.0/accumulator.go:238   Accumulator collect called      {"kind": "exporter", "name": "prometheus"}
2022-06-13T09:43:55.555Z        debug   prometheusexporter@v0.35.0/collector.go:225     metric served: Desc{fqName: "new_histogram", help: "", constLabels: {}, variableLabels: []}    {"kind": "exporter", "name": "prometheus"}
2022-06-13T09:43:55.555Z        debug   prometheusexporter@v0.35.0/collector.go:225     metric served: Desc{fqName: "new_sample", help: "", constLabels: {}, variableLabels: []}       {"kind": "exporter", "name": "prometheus"}
2022-06-13T09:43:58.657Z        debug   memorylimiter/memorylimiter.go:270      Currently used memory.  {"kind": "processor", "name": "memory_limiter", "cur_mem_mib": 10}
2022-06-13T09:44:03.658Z        debug   memorylimiter/memorylimiter.go:270      Currently used memory.  {"kind": "processor", "name": "memory_limiter", "cur_mem_mib": 10}
2022-06-13T09:44:05.554Z        debug   prometheusexporter@v0.35.0/collector.go:213     collect called  {"kind": "exporter", "name": "prometheus"}
2022-06-13T09:44:05.554Z        debug   prometheusexporter@v0.35.0/accumulator.go:238   Accumulator collect called      {"kind": "exporter", "name": "prometheus"}
2022-06-13T09:44:05.554Z        debug   prometheusexporter@v0.35.0/collector.go:225     metric served: Desc{fqName: "new_sample", help: "", constLabels: {}, variableLabels: []}       {"kind": "exporter", "name": "prometheus"}
2022-06-13T09:44:05.554Z        debug   prometheusexporter@v0.35.0/collector.go:225     metric served: Desc{fqName: "new_histogram", help: "", constLabels: {}, variableLabels: []}    {"kind": "exporter", "name": "prometheus"}

Here is a screenshot of the exported metrics:

[screenshot]

naveenkumarthangaraj commented 2 years ago

Hi Team,

Can you please help us with this issue?

Thanks, Naveen T

ocelotl commented 2 years ago

> Hi Team,
>
> Can you please help us with this issue?
>
> Thanks, Naveen T

Hello @naveenkumarthangaraj, I'll investigate; thanks for reporting :+1:

ocelotl commented 2 years ago

Please fix your collector configuration; it is not indented. How are you running the collector?

codeboten commented 2 years ago

@ocelotl the description contained indentation that wasn't showing up in Markdown; I just added a code block around it.

ocelotl commented 2 years ago

@naveenkumarthangaraj Please check your configuration; there are some invalid keys, such as insecure directly below otlp:.

aabmass commented 2 years ago

If I understand correctly, you're using the example to send metrics to the collector with the OTLP exporter. The example sends two points and then the script ends. Then you have Prometheus scraping the collector's Prometheus exporter?

> it is exporting the last passed value automatically.

@naveenkumarthangaraj this is how Prometheus works: it will keep exposing the previous cumulative value even if no new points come in. I imagine this is happening in the collector's Prometheus exporter in this case.
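
For comparison, here is a sketch (not part of the original example) of a long-running process that keeps adding to the counter; the cumulative value exposed through Prometheus then keeps changing on every export interval instead of staying frozen at the last recorded sum:

import time

from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
    OTLPMetricExporter,
)
from opentelemetry.metrics import get_meter_provider, set_meter_provider
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

exporter = OTLPMetricExporter(insecure=True)
reader = PeriodicExportingMetricReader(exporter)
set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = get_meter_provider().get_meter("getting-started", "0.1.2")
counter = meter.create_counter("new-sample")

# Keep the process alive and keep recording; each periodic export then carries
# a growing cumulative sum instead of repeating the same last value.
while True:
    counter.add(1)
    time.sleep(5)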

> It will stop after 8-10 minutes.

That sounds like it could be a bug in the collector's Prometheus exporter, or possibly that exporter reclaims memory by dropping metrics it hasn't seen for several minutes?
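
If the goal is to report a current value on every collection cycle without running your own loop, an asynchronous instrument can be used instead. A minimal sketch, assuming the same 1.12.0rc1 API already imported in the example above (the callback and the observed value are illustrative only):

from typing import Iterable

from opentelemetry.metrics import CallbackOptions, Observation, get_meter_provider


def observe_sample(options: CallbackOptions) -> Iterable[Observation]:
    # Invoked by the SDK on every collection cycle, so the collector keeps
    # receiving a fresh observation instead of a stale one.
    return [Observation(30)]


# Assumes the MeterProvider has already been configured as in the issue above.
meter = get_meter_provider().get_meter("getting-started", "0.1.2")
observable_counter = meter.create_observable_counter(
    "new-sample-observable", callbacks=[observe_sample]
)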

ocelotl commented 2 years ago

Hey @naveenkumarthangaraj :v:

From what @aabmass mentions, this looks like expected behavior. I am closing this issue; please reopen if you are still facing difficulties.