Hi all, need a hint - I'm working on a setup where two Spring Boot services talk through Kafka. Service A emits an event1 that is consumed by service B. Then service B emits another event2 that is later consumed by service A. Both events have x-datadog-trace-id set, and I can find all logs from both services in Datadog by filtering on trace_id. However, I cannot see service B's part of the processing in the graph below (also no spans from this service, etc.) - it seems I'm missing something in my setup.
Both services are running inside Kubernetes. Service B is built with new pipelines we're building from scratch, and has a sidecar taking care of communication with Datadog. The Java arguments used to run it are `-javaagent:/app/dd-java-agent.jar -Ddd.agent.port=8126 -XX:FlightRecorderOptions=stackdepth=256 -Ddd.profiling.enabled=true -Ddd.logs.injection=true -Ddd.service=service-b -Ddd.env=prod -Dcom.sun.management.jmxremote=0.0.0.0 -Dcom.sun.management.jmxremote.rmi.port=9010 -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false`
Service A is built using the old pipelines, but with the same Java params as above.
What I observe is that event1 emitted from service A has the following Kafka headers:

And event2 emitted from service B has the following Kafka headers:

Clearly event2 is missing several headers that, AFAIK, are added automatically by the Datadog agent in service A.
Also note that x-datadog-parent-id has a different value - is that fine, or should it be the same? What should I change in the service B setup to make it all work correctly together?
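To make the comparison concrete, here is a tiny standalone sketch of the check I'm doing (headers modeled as a plain map here; the real code reads them from the Kafka ConsumerRecord, and the id values below are made up):

```java
import java.util.Map;

public final class TraceHeaderCheck {
    // True when both events carry the same x-datadog-trace-id,
    // i.e. they should belong to the same trace in Datadog.
    static boolean sameTrace(Map<String, String> event1, Map<String, String> event2) {
        String t1 = event1.get("x-datadog-trace-id");
        return t1 != null && t1.equals(event2.get("x-datadog-trace-id"));
    }

    public static void main(String[] args) {
        Map<String, String> event1 = Map.of(
                "x-datadog-trace-id", "1234567890",
                "x-datadog-parent-id", "1111111111");
        Map<String, String> event2 = Map.of(
                "x-datadog-trace-id", "1234567890",
                "x-datadog-parent-id", "2222222222");
        // Same trace id, but different parent ids - this mirrors what I see.
        System.out.println("same trace: " + sameTrace(event1, event2)); // prints "same trace: true"
    }
}
```

This is exactly the situation I observe: the trace ids match between event1 and event2, but the parent ids differ, and I'm not sure whether that's expected.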
I'm quite new to Datadog - I tried to find answers in the docs and experimented with different settings, but no luck so far. The people who set up the old service A are no longer around, and we can't find a difference between the setups.
![image (3)](https://github.com/DataDog/dd-trace-java/assets/9881793/46313753-2060-4f61-98e6-9649ffd19441)