My OTel Collector is no longer a sidecar (which used to prevent the entire pod from becoming healthy); it now runs as a separate pod, but it constantly fails to reach the Uptrace service.
Here is the error:
2024-01-16T18:11:54.875Z warn zapgrpc/zapgrpc.go:195 [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "my-uptrace:14317", ServerName: "my-uptrace:14317", }. Err: connection error: desc = "transport: Error while dialing: dial tcp: lookup my-uptrace on 10.28.0.10:53: no such host" {"grpc_log": true}
2024-01-16T18:11:56.002Z info exporterhelper/retry_sender.go:154 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "otlp/local", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup my-uptrace on 10.28.0.10:53: no such host\"", "interval": "28.778187984s"}
2024-01-16T18:11:56.007Z info exporterhelper/retry_sender.go:154 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "metrics", "name": "otlp/local", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup my-uptrace on 10.28.0.10:53: no such host\"", "interval": "34.284222836s"}
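For context, I assume the collector's generated config contains an OTLP exporter roughly like the sketch below; the exporter name otlp/local and the address my-uptrace:14317 are taken from the log above, the rest is my guess:

exporters:
  otlp/local:
    # the address the collector is dialing, per the error above
    endpoint: my-uptrace:14317

So the "no such host" error presumably means that no Service named my-uptrace is resolvable from the namespace the collector pod runs in.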
Helm release values:
otelcol:
  enabled: true

clickhouse:
  enabled: true
  persistence:
    enabled: true
    storageClassName: '' # leave empty to use the default storage class
    size: 64Gi

postgresql:
  enabled: false

service:
  type: ClusterIP # or LoadBalancer
  http_port: 14318
  grpc_port: 14317
  annotations: {}

ingress:
  enabled: true
  className: "nginx"
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  hosts:
    - host: redacted
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: redacted-tls
      hosts:
        - redacted

uptrace:
  config:
    auth:
      users:
        - name: Anonymous
          email: uptrace@localhost
          password: uptrace
          notify_by_email: false

    ch:
      user: default
      password:
      database: uptrace

    debug: true

    pg:
      addr: redacted
      user: uptrace
      password: redacted
      database: uptrace

    projects:
      # Conventionally, the first project is used to monitor Uptrace itself.
      - id: 1
        name: Uptrace
        # Token grants write access to the project. Keep a secret.
        token: redacted
        pinned_attrs:
          - service.name
          - host.name
          - deployment.environment
        # Group spans by deployment.environment attribute.
        group_by_env: false
        # Group funcs spans by service.name attribute.
        group_funcs_by_service: false
      # Other projects can be used to monitor your applications.
      # To monitor micro-services or multiple related services, use a single project.

    site:
      addr: 'https://redacted'

    secret_key: 'redacted'
I'm not quite sure how I can affect the generated ConfigMap from the Helm values in order to change the OTel Collector exporter endpoint.
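If the bundled collector follows the upstream opentelemetry-collector chart convention of merging an otelcol.config block into the generated ConfigMap, I would expect an override roughly like the sketch below; the config key and the fully qualified Service name are assumptions on my part rather than something I have confirmed against this chart:

otelcol:
  enabled: true
  config:
    exporters:
      otlp/local:
        # hypothetical FQDN; replace <namespace> with the namespace
        # where the Uptrace Service actually lives
        endpoint: my-uptrace.<namespace>.svc.cluster.local:14317

Is that the right place to set it, or does the chart derive the exporter endpoint some other way?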