Open PuppetA17 opened 3 years ago
The following is the jaeger-agent container's manifest YAML:
```yaml
- args:
  - --jaeger.tags=cluster=undefined,deployment.name=myapp,pod.namespace=observability,pod.name=${POD_NAME:},host.ip=${HOST_IP:},container.name=myapp
  - --reporter.grpc.host-port=dns:///jaeger-collector-headless.observability.svc:14250
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: HOST_IP
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.hostIP
  image: jaegertracing/jaeger-agent:1.20.0
  imagePullPolicy: IfNotPresent
  name: jaeger-agent
  ports:
  - containerPort: 5775
    hostPort: 5775
    name: zk-compact-trft
    protocol: UDP
  - containerPort: 5778
    hostPort: 5778
    name: config-rest
    protocol: TCP
  - containerPort: 6831
    hostPort: 6831
    name: jg-compact-trft
    protocol: UDP
  - containerPort: 6832
    hostPort: 6832
    name: jg-binary-trft
    protocol: UDP
  - containerPort: 14271
    hostPort: 14271
    name: admin-http
    protocol: TCP
  resources: {}
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
  - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    name: default-token-4rwmk
    readOnly: true
```
Could you give us the output of `kubectl get services -n observability`? Are you able to consistently reproduce it using recent versions of minikube?
My k8s version is 1.20.4 and my jaeger version is v1.22.0. When I used the jaeger operator to create the agent, set the strategy to "DaemonSet", and set `hostNetwork: true`, I hit the same problem. I checked `/etc/resolv.conf` and found that the agent pod was using the node's resolv.conf. So I changed the `dnsPolicy` of the agent DaemonSet from `ClusterFirst` to `ClusterFirstWithHostNet`, and the problem was resolved.
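The fix above can be sketched as a fragment of the agent DaemonSet's pod spec (a minimal illustration, not the operator's full output; only `hostNetwork` and `dnsPolicy` are the relevant fields):

```yaml
# Sketch: with hostNetwork: true, the default dnsPolicy (ClusterFirst)
# makes the pod inherit the node's /etc/resolv.conf, so cluster-internal
# names like jaeger-collector-headless.observability.svc fail to resolve.
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet  # resolve cluster DNS despite host networking
```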
Describe the bug: the jaeger-agent reports an error:
To Reproduce Steps to reproduce the behavior:
```python
import logging

if __name__ == "__main__":
    log_level = logging.DEBUG
    logging.getLogger('').handlers = []  # drop any handlers already attached to the root logger
    logging.basicConfig(format='%(asctime)s %(message)s', level=log_level)
```