astingengo closed this issue 10 months ago.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Is this the only log line printed, even with logging set to debug?
2023-05-16T09:44:44.227Z info service/telemetry.go:113 Setting up own telemetry...
The log wasn't copied correctly before (it scrolled horizontally); I was able to copy and paste it properly now. This is the entire log.
Thanks for updating the log. Unfortunately, there's not much to go on here.
My recommendation would be to troubleshoot by isolating components (e.g. swapping the rabbitmq receiver for hostmetrics, or the exporter for the file or logging exporter).

I think I found the issue, but not the solution.
"Cannot create liveness probe.","opentelemetrycollector":"uptrace/rabbitmq","error":"service property in the configuration doesn't contain extensions","stacktrace"
It seems it expects a health check. But even if I try with an empty health_check it will not work; this config:
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: rabbitmq
  namespace: uptrace
spec:
  image: otel/opentelemetry-collector-contrib:0.77.0
  config: |
    receivers:
      rabbitmq:
        endpoint: http://rabbit:15672
        username: user
        password: pass
        collection_interval: 5s
    processors:
    extensions:
      health_check: {}
    exporters:
      otlp/uptrace:
        endpoint: uptrace:4317
        headers:
          # Copy your project DSN here
          uptrace-dsn: 'http://project3_secret_token@uptrace:14317/2'
    service:
      extensions: [health_check]
      pipelines:
        metrics:
          receivers: [rabbitmq]
          processors: []
          exporters: [otlp/uptrace]
```
will output:
"controllers.OpenTelemetryCollector","msg":"couldn't determine metrics port from configuration, using 8888 default value","opentelemetrycollector":"uptrace/rabbitmq","error":"missing port in address"}
This appears to be related to https://github.com/open-telemetry/opentelemetry-helm-charts/issues/242 and https://github.com/open-telemetry/opentelemetry-helm-charts/issues/697.
From what I can tell, this doesn't appear to be related to rabbitmq. I'll leave the issue open for further discussion but I'd encourage you to engage on the linked issues.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.
Is there any update on this problem? I have the same issue when I use the RabbitMQ receiver to get data from the RabbitMQ management API.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
I can't seem to get this working even locally.
Images used: rabbitmq:3.11.24-management and otel/opentelemetry-collector-contrib:0.98.0
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "otel_collector:55680"
  rabbitmq:
    endpoint: http://rabbitmq:15672
    username: otel
    password: otel
    collection_interval: 10s
processors:
  batch:
    send_batch_size: 1024
    timeout: 5s
exporters:
  debug:
    verbosity: normal
  otlp/grafana:
    endpoint: tempo:4317
    tls:
      insecure: true
extensions:
  zpages:
    endpoint: :55679
service:
  extensions: [zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/grafana]
    metrics:
      receivers: [rabbitmq]
      processors: []
      exporters: [debug]
```
"users": [
{
"name": "otel",
"password": "otel",
"tags": "monitoring"
}
]
The connection is made with no errors, but no metrics are generated. Adding a hostmetrics receiver verifies everything else is set up properly.
2024-04-15 20:34:27 2024-04-16T02:34:27.823Z info MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 0, "metrics": 0, "data points": 0}
Is there any way to debug further?
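One way to get more detail is to raise the collector's own log level to debug via the service telemetry settings (a sketch; this is the standard collector telemetry config, not something specific to the RabbitMQ receiver):

```yaml
# Sketch: turn on the collector's internal debug logging
# to see per-scrape activity from the rabbitmq receiver.
service:
  telemetry:
    logs:
      level: debug
```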
I found the issue locally. I think these docs are quite misleading: you still have to grant the user permissions on each vhost before it can access them.
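As a sketch, those per-vhost permissions can be granted in the same definitions file that creates the user (the `/` vhost here is an assumption; adjust to the vhosts you actually monitor):

```json
"permissions": [
  {
    "user": "otel",
    "vhost": "/",
    "configure": ".*",
    "write": ".*",
    "read": ".*"
  }
]
```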
For anyone else looking for an easy way to manage this with new vhosts, this functionality was added in 3.11.11 to automatically grant permissions for a user.
Component(s)
receiver/rabbitmq
What happened?
Description
Trying to get metrics from RabbitMQ but I see nothing
Steps to Reproduce
Check additional context
Expected Result
Seeing metrics from RabbitMQ
Actual Result
No metrics
Collector version
0.77.0
Environment information
Environment
OS: Ubuntu 22.04 Kubernetes: 1.25.6
OpenTelemetry Collector configuration
Log output
Additional context