AnujAroshA opened this issue 2 years ago
Are you using "mode" or "agentCollector"/"standaloneCollector"?
I was able to run this command and successfully disable the logs and metrics pipelines:
helm template testing open-telemetry/opentelemetry-collector --values ./charts/opentelemetry-collector/examples/deployment-otlp-traces/values.yaml
# Source: opentelemetry-collector/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: testing-opentelemetry-collector
  labels:
    helm.sh/chart: opentelemetry-collector-0.16.2
    app.kubernetes.io/name: opentelemetry-collector
    app.kubernetes.io/instance: testing
    app.kubernetes.io/version: "0.50.0"
    app.kubernetes.io/managed-by: Helm
data:
  relay: |
    exporters:
      logging: {}
    extensions:
      health_check: {}
      memory_ballast: {}
    processors:
      batch: {}
      memory_limiter:
        check_interval: 5s
        limit_mib: 1638
        spike_limit_mib: 512
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    service:
      extensions:
        - health_check
        - memory_ballast
      pipelines:
        traces:
          exporters:
            - logging
          processors:
            - memory_limiter
            - batch
          receivers:
            - otlp
      telemetry:
        metrics:
          address: 0.0.0.0:8888
I was also able to remove the logs and metrics pipelines when using standaloneCollector and agentCollector.
Can you provide more details on the commands you are running and your full values.yaml?
Are you using "mode" or "agentCollector"/"standaloneCollector"?
I'm actually using mode: deployment
Can you provide more details on the commands you are running and your full values.yaml?
The otel-collector part of my values file is as below:
opentelemetry-collector:
  mode: deployment
  ports:
    otlp:
      enabled: true
    otlp-http:
      enabled: true
    jaeger-compact:
      enabled: false
    jaeger-grpc:
      enabled: false
    jaeger-thrift:
      enabled: false
    metrics:
      enabled: false
    zipkin:
      enabled: false
  config:
    exporters:
      jaeger:
        endpoint: otel-jaeger-collector:14250
        tls:
          insecure: true
    extensions:
      health_check: {}
    processors: {}
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
      prometheus: {}
      jaeger: null
      zipkin: null
    service:
      extensions:
        - health_check
      pipelines:
        traces:
          receivers:
            - otlp
          processors: {}
          exporters:
            - jaeger
            - logging
        metrics: null
        logs: null
I am able to use your values.yaml and turn off metrics and logs. Here is the resulting configmap after running helm template opentelemetry-collector open-telemetry/opentelemetry-collector --values ./values.yaml --output-dir testing
---
# Source: opentelemetry-collector/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: opentelemetry-collector
  labels:
    helm.sh/chart: opentelemetry-collector-0.20.0
    app.kubernetes.io/name: opentelemetry-collector
    app.kubernetes.io/instance: opentelemetry-collector
    app.kubernetes.io/version: "0.53.0"
    app.kubernetes.io/managed-by: Helm
data:
  relay: |
    exporters:
      jaeger:
        endpoint: otel-jaeger-collector:14250
        tls:
          insecure: true
      logging: {}
    extensions:
      health_check: {}
      memory_ballast: {}
    processors:
      batch: {}
      memory_limiter:
        check_interval: 5s
        limit_mib: 1638
        spike_limit_mib: 512
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      prometheus:
        config:
          scrape_configs:
            - job_name: opentelemetry-collector
              scrape_interval: 10s
              static_configs:
                - targets:
                    - ${MY_POD_IP}:8888
    service:
      extensions:
        - health_check
      pipelines:
        traces:
          exporters:
            - jaeger
            - logging
          processors: {}
          receivers:
            - otlp
      telemetry:
        metrics:
          address: 0.0.0.0:8888
What command are you using?
Command I used
helm template otel helm/values.local.yaml --output-dir testing
Chart.yaml OTel-related configuration:
dependencies:
  - name: opentelemetry-collector
    alias: collector
    repository: https://open-telemetry.github.io/opentelemetry-helm-charts
    version: 0.x.x
One thing I noticed is that we are pointing to apiVersion: v2, while it seems you are pointing to v1.
I have added the values.yaml changes in a previous comment.
One thing I noticed is that we are pointing to apiVersion: v2, while it seems you are pointing to v1.
I am not sure what you mean. All the template files for this chart hard code v1. How are you changing it?
Also, what helm version are you using? Are you using the opentelemetry-collector chart as a subchart?
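(For context: the two apiVersion fields being discussed live at different levels and are unrelated. The sketch below, using a hypothetical parent chart, shows where each one appears.)

```yaml
# Chart.yaml — "apiVersion: v2" here is the Helm chart schema version
# (v2 is the Helm 3 schema, which supports the dependencies block).
apiVersion: v2
name: my-parent-chart   # hypothetical chart name
version: 0.1.0
dependencies:
  - name: opentelemetry-collector
    repository: https://open-telemetry.github.io/opentelemetry-helm-charts
    version: 0.x.x
---
# A rendered template such as templates/configmap.yaml — "apiVersion: v1"
# here is the Kubernetes API version of the ConfigMap resource; it has
# nothing to do with the chart's own apiVersion above.
apiVersion: v1
kind: ConfigMap
```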
Yes, I'm using opentelemetry-collector as a dependency chart, just as I showed in the previous comment. In that Chart.yaml we are using apiVersion: v2.
Also, the Helm version is v3.9.0.
Maybe related to https://github.com/helm/helm/issues/9027
Yes, this seems like an issue with using charts as dependencies. A potential fix is coming in Helm 3.13.0, but I'm not sure how soon that will be released.
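(The underlying problem, per helm/helm#9027, is that Helm drops null values set in a parent chart when coalescing them with a subchart's defaults, so the subchart still renders its default pipelines. A hypothetical parent values.yaml illustrating the intent that gets lost:)

```yaml
# Parent chart values.yaml, with the collector chart aliased as "collector".
# On Helm versions affected by helm/helm#9027, these nulls are discarded
# during subchart value coalescing, so the chart's default metrics and
# logs pipelines reappear in the rendered output.
collector:
  mode: deployment
  config:
    service:
      pipelines:
        metrics: null   # intended to delete the default metrics pipeline
        logs: null      # intended to delete the default logs pipeline
```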
I also tried providing empty arrays for receivers, but it seems the pod won't start because the collector won't accept a pipeline without receivers.
@TylerHelmuth would it be possible to provide a fix so that passing an empty object ({}) or an empty array also disables the pipeline? Or is there a workaround that works even when using this chart as a dependency?
Thanks in advance!
@AlissonRS you're correct that the real fix is coming in an upcoming helm release. In the meantime, when using the collector chart as a subchart, I don't believe there is a workaround. The collector rejects empty arrays in the receiver/exporter pipelines.
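(For reference, this is the kind of config the collector refuses to start with: a pipeline must declare at least one receiver and one exporter, so an empty array fails validation rather than disabling the pipeline.)

```yaml
# Rejected at collector startup: empty arrays do not disable a pipeline,
# they fail config validation instead.
service:
  pipelines:
    metrics:
      receivers: []   # error: pipeline must have at least one receiver
      exporters: []   # error: pipeline must have at least one exporter
```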
@TylerHelmuth for the time being I'm just keeping the otlp receiver in the metrics pipeline; as long as no one actively sends metrics to the collector, I'm assuming the pipeline will just stay idle, which is good enough for now.
service:
  pipelines:
    traces:
      exporters: [ otlp/tempo ]
    logs:
      receivers: [ filelog ]
      exporters: [ loki ]
    # disable metrics pipeline
    metrics:
      exporters: [ logging ]
      processors: [ memory_limiter ]
      receivers: [ otlp ]
@TylerHelmuth one thing I'm not sure about, though: my apps will be instrumented with the OpenTelemetry SDK and will send traces to the collector, so the traces pipeline can process them and export to Grafana Tempo.
Since the metrics pipeline also uses the otlp receiver, I'm assuming it will get a copy of all the trace data and process it too. Is that correct?
@AlissonRS no, the metrics pipeline will only get metrics data. Behind the scenes the otlpreceiver exposes endpoints like v1/traces, v1/metrics, and v1/logs. OTel SDKs sending traces will export to v1/traces, and the receiver will only send that data down the traces pipeline. If the receiver receives no data on the metrics or logs endpoints, those pipelines will do nothing.
As mentioned in the basic top-level configuration section of the documentation, we have set null values as shown below.
But when I inspect the ConfigMap, it still shows all the pipelines.
What we need is to get rid of the logs and metrics pipelines and keep only traces.
Following are the details of my testing environment, even though some of them do not directly impact the issue.