Closed: HansHabraken closed this issue 11 months ago
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Hi @HansHabraken. Thanks for reporting the issue.
I cannot reproduce it yet. I don't get the doubled /hostfs prefix. Do you run it locally with docker run or another way? Can you please provide more details that can help to reproduce it?
Hi @dmitryax, yes, sure. We use Docker Compose to run our application locally together with the collector. Here is our docker-compose.yaml:
services:
  application:
    image: applicationImage:latest
    ports:
      - 1234:1234
      - ...
    environment:
      OTEL_JAVAAGENT_DEBUG: true
      OTEL_SERVICE_NAME: application-name
      SPLUNK_METRICS_ENABLED: false
      OTEL_RESOURCE_ATTRIBUTES: service.name=application-name,deployment.environment=test
      ...
  otel-collector:
    image: quay.io/signalfx/splunk-otel-collector:latest
    command: [--config=/etc/splunk-otel-collector-config.yaml]
    volumes:
      - /:/hostfs
      - /tmp/splunk-otel-collector-config.yaml:/etc/splunk-otel-collector-config.yaml
    network_mode: "service:application"
    environment:
      SPLUNK_TRACE_ACCESS_TOKEN: ${SPLUNK_TRACE_ACCESS_TOKEN}
      SPLUNK_METRIC_ACCESS_TOKEN: ${SPLUNK_METRIC_ACCESS_TOKEN}
      ...
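The mounted splunk-otel-collector-config.yaml boils down to the hostmetrics receiver with root_path pointing at the mount. A trimmed sketch of the relevant part (exporters and the rest of the pipeline omitted):

# Sketch only: just the receiver section relevant to this issue.
receivers:
  hostmetrics:
    root_path: /hostfs
    scrapers:
      filesystem: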
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
receiver/hostmetrics
What happened?
Description
Hi, we are currently running the Splunk distribution of the OpenTelemetry Collector inside a Docker container. To get the correct metrics from the host, we mount the entire filesystem of the host inside the container and set the root_path configuration as described in the documentation. Running the collector container results in errors from the filesystem scraper, and it looks like the root_path and the mountpoint are sometimes concatenated. I think I have traced this back to this code. Is this a bug, or is there something missing in the documentation?
Steps to Reproduce
1. Mount the host filesystem into the container with /:/hostfs
2. Set the root_path configuration to /hostfs
3. Enable the hostmetrics receiver in the collector config with the filesystem scraper enabled (a minimal config sketch follows below)
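A minimal collector config matching these steps looks roughly like this (a sketch only; the logging exporter just completes the pipeline and is not part of our actual setup):

receivers:
  hostmetrics:
    collection_interval: 10s
    root_path: /hostfs
    scrapers:
      filesystem:

exporters:
  logging:

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [logging]

Running this against the /:/hostfs mount is enough to trigger the errors in the Actual Result section.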
Expected Result
The collector should run without any errors.
Actual Result
The filesystem scraper logs errors in which the mount points carry a doubled /hostfs prefix.
Collector version
v0.72.0
Environment information
Environment
Container image: quay.io/signalfx/splunk-otel-collector:latest
Host OS: macOS Ventura 13.1 (M1)
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response