Open — rapphil opened this issue 1 year ago
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
This core issue is probably the same as in: https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18408
@rapphil Not sure if you managed to work around this, but until a fix is implemented you can use the jmx-metrics plugin that is included in the OTel Java agent (also in the AWS distro). Define each JMX item as a separate gauge metric and it will be exported just fine by awsemf.
```
java -javaagent:/path/otel-agent.jar -Dotel.jmx.config=/path/jmx.yaml
```
Example `jmx.yaml`:

```yaml
---
rules:
  - bean: java.lang:type=Memory
    mapping:
      HeapMemoryUsage.used:
        metric: yourmetric.memory.heapmemoryusage.used
        type: gauge
        desc: The current heap size
        unit: By
      HeapMemoryUsage.max:
        metric: yourmetric.memory.heapmemoryusage.max
        type: gauge
        desc: The maximum allowed heap size
        unit: By
  - bean: java.lang:name=G1 Old Gen,type=MemoryPool
    mapping:
      Usage.max:
        metric: yourmetric.memorypool.g1oldgen.max
        type: gauge
        desc: The maximum G1 Old Gen pool memory
        unit: By
      Usage.used:
        metric: yourmetric.memorypool.g1oldgen.used
        type: gauge
        desc: G1 Old Gen pool memory currently used
        unit: By
```
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
exporter/awsemf
What happened?
Description
It seems that metrics recorded with an UpDownCounter in OpenTelemetry do not work well with the awsemfexporter.
Take JVM memory metrics as an example: the Java agent registers memory metrics with upDownCounterBuilder. I then use the EMF exporter to publish the metrics to CloudWatch. In CloudWatch (see the attached image), it shows the increments/decrements instead of the current value. (In this particular case, I want to know the current memory limit, not the increments.)
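The symptom can be illustrated without any OpenTelemetry dependencies. The sketch below (plain Java, with made-up heap-usage values) contrasts the cumulative series an UpDownCounter actually reports with the per-interval deltas that appear in CloudWatch instead:

```java
public class UpDownCounterDeltas {
    // Convert cumulative readings into per-interval deltas, starting from zero —
    // this mirrors what the exporter appears to emit for UpDownCounter data.
    static long[] toDeltas(long[] cumulative) {
        long[] deltas = new long[cumulative.length];
        long prev = 0;
        for (int i = 0; i < cumulative.length; i++) {
            deltas[i] = cumulative[i] - prev;
            prev = cumulative[i];
        }
        return deltas;
    }

    public static void main(String[] args) {
        // Hypothetical heap-usage samples (MiB): a cumulative, non-monotonic
        // series, i.e. the "current value" an UpDownCounter reports.
        long[] cumulative = {512, 640, 600, 768};
        long[] deltas = toDeltas(cumulative);
        for (int i = 0; i < cumulative.length; i++) {
            System.out.println("current=" + cumulative[i] + " delta=" + deltas[i]);
        }
    }
}
```

The last column is what shows up in the CloudWatch graph; the expectation is the `current=` column.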
Steps to Reproduce
Use the OpenTelemetry Java agent to get runtime process metrics about the heap and try to send those metrics to CloudWatch using emf.
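A minimal collector configuration for reproducing this might look like the following sketch (the OTLP receiver, namespace, and region are assumptions, not taken from the reporter's environment):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  awsemf:
    namespace: JVMDemo   # hypothetical CloudWatch namespace
    region: us-east-1    # assumed region

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [awsemf]
```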
Expected Result
We expect to see the actual value of the heap size instead of the delta.
Actual Result
We can only see the values in the form of increments.
Collector version
0.78.0
Environment information
Environment
NA
OpenTelemetry Collector configuration
Log output
Additional context
NA