msherif1234 opened this issue 8 months ago (status: Open)
Can you double-check if the Go instrumentation and application containers share the process namespace?
Reference:
- https://github.com/open-telemetry/opentelemetry-go-instrumentation#instrument-an-application-in-kubernetes
- https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/

Thanks @pellared, that was it. Can you please share with me a way to see those instrumentations? I tried the following, where 172.30.140.13 is the clusterIP of the service:

```
[root@ci-ln-8hfrsd2-72292-c84sf-worker-a-g7qpd /]# grpcurl -plaintext 172.30.140.13:4317 list
Failed to list services: server does not support the reflection API
```
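A side note (my assumption, not stated in this thread): the collector's OTLP gRPC receiver does not normally register the gRPC reflection service, so a failing `grpcurl ... list` does not necessarily mean the endpoint is down. One common way to confirm spans are arriving is to add a logging-style exporter to the collector pipeline. A minimal sketch, with component names that should be verified against the collector version actually deployed:

```yaml
# Sketch: print received spans to the collector's stdout.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  logging:
    verbosity: detailed
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging]
```

With this in place, `kubectl logs` on the collector pod should show span details as they arrive.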
This is what I set:

```
Name: "OTEL_EXPORTER_OTLP_ENDPOINT", Value: "172.30.140.13:4317",
```

This is what I see in the container logs, where 10.0.128.4 is the pod IP:

```
{"level":"info","ts":1700514825.3263397,"logger":"Instrumentation.Controller","caller":"opentelemetry/controller.go:54","msg":"got event","attrs":[{"Key":"net.peer.port","Value":{"Type":"STRING","Value":"2055"}},{"Key":"rpc.system","Value":{"Type":"STRING","Value":"grpc"}},{"Key":"rpc.service","Value":{"Type":"STRING","Value":"/pbflow.Collector/Send"}},{"Key":"net.peer.name","Value":{"Type":"STRING","Value":"10.0.128.4"}}]}
2023/11/20 21:13:45 traces export: Post "https://localhost:4318/v1/traces": dial tcp [::1]:4318: connect: connection refused
```
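A hedged observation (my inference, not confirmed in the thread): the export error points at the default `https://localhost:4318` endpoint, which suggests the configured value was never picked up. The OTLP exporter environment variables generally expect a full URL including the scheme, so a bare `host:port` value may be rejected and the exporter falls back to its default. A sketch of the form the value usually needs, reusing the service IP from above:

```shell
# Hypothetical sidecar environment. Note the explicit scheme, and that
# 4317 is conventionally the OTLP/gRPC port while 4318 is OTLP/HTTP.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://172.30.140.13:4317"
echo "$OTEL_EXPORTER_OTLP_ENDPOINT"
```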
Hi everyone! I am having the same issue when instrumenting Go using the operator.

```
{"level":"info","ts":1705690630.9377563,"logger":"Instrumentation.Analyzer","caller":"process/discover.go:73","msg":"process not found yet, trying again soon","exe_path":"/app"}
```

I am using the following autoinstrumentation image:

```
ghcr.io/open-telemetry/opentelemetry-go-instrumentation/autoinstrumentation-go:v0.10.1-alpha
```

I can confirm the pods have the config:

```yaml
shareProcessNamespace: true
```

I can also confirm that the container gets injected with the following security context:

```yaml
securityContext:
  privileged: true
  runAsUser: 0
```
Am I missing something? Thanks in advance!
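For anyone comparing manifests, the two settings above live at different levels of the pod spec. A minimal sketch (container names and the app image are placeholders, not taken from this thread):

```yaml
# Pod-level: lets the instrumentation sidecar see the app's processes.
spec:
  shareProcessNamespace: true
  containers:
    - name: app              # placeholder
      image: example/app     # placeholder
    - name: go-instrumentation
      image: ghcr.io/open-telemetry/opentelemetry-go-instrumentation/autoinstrumentation-go:v0.10.1-alpha
      # Container-level: injected by the operator, as observed above.
      securityContext:
        privileged: true
        runAsUser: 0
```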
@lel-war Are you using OTEL_GO_AUTO_TARGET_EXE or instrumentation.opentelemetry.io/otel-go-auto-target-exe? Is your Go executable's full path /app (as passed to the instrumentation in the log you attached)?
Hi @RonFed, thanks for the quick response. To answer your question, I am using the following annotation:

```yaml
instrumentation.opentelemetry.io/otel-go-auto-target-exe: /app
```

The value "/app" is just an example; in reality it looks more like /home/user/app. So to answer your question, yes!
Hello there!
Out of curiosity, did you find a solution?
I'm having the same issue.
I created a debug (ephemeral) container to verify the path of the executable, and it seems to be the correct one.
Am I missing something? Do you have any idea?
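For reference, a sketch of the ephemeral-container check described above (pod name, container name, and the PID are placeholders of mine):

```shell
# Attach a debug container that shares the app container's namespaces,
# then resolve the running binary's full path from inside it.
kubectl debug -it my-pod --image=busybox --target=app -- sh
# inside the debug shell (1 is a hypothetical PID of the app process):
# readlink /proc/1/exe
```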
Describe the bug
I am not sure how to make my Go app visible to the instrumentation container.
Environment
Running on an OCP (OpenShift Container Platform) cluster.
To Reproduce
Steps to reproduce the behavior:
1. Install cert-manager:
   ```
   kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.2/cert-manager.yaml
   ```
2. Deploy the OTel operator:
   ```
   kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
   ```
3. Create the OTel collector object.
4. Create the Instrumentation object.
5. Use the https://github.com/netobserv/network-observability-operator/pull/500 PR to hack the netobserv operator and enable auto-instrumentation. For now we need to set OTEL_EXPORTER_OTLP_ENDPOINT manually to match the OTel service clusterIP. Then compile and push the image:
   ```
   make image-build
   make image-push
   ```
   Then deploy the operator:
   ```
   USER=username VERSION="main-amd64" make deploy
   ```
6. Create the netobserv flow collector:
   ```
   oc create -f config/samples/flows_v1beta2_flowcollector.yaml
   ```
7. We should now see the netobserv agent pods running with two containers, the new one being a sidecar for instrumentation.
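For step 4, a minimal Instrumentation object might look like the following sketch (the name and endpoint are placeholders of mine, assuming the operator's v1alpha1 API):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: go-instrumentation   # placeholder
spec:
  exporter:
    # Should point at the collector service; an explicit scheme helps
    # avoid falling back to the default localhost endpoint.
    endpoint: http://otel-collector.default.svc:4317   # placeholder
```

The workload then opts in via pod annotations, such as the otel-go-auto-target-exe annotation discussed earlier in this thread.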
Expected behavior
I expected the instrumentation container to find the app binary and start emitting some form of metrics, but instead I am getting the errors shown in the logs above.
Additional context
I used the instructions here: https://opentelemetry.io/docs/kubernetes/operator/automatic/