Closed by mikan3rd 3 weeks ago
Hey,
do you actually need to manually continue the trace at all in v8? I would have thought that this should be automatically continued anyhow?
Looking at the issue you linked, I do see the span you started there connected to the frontend. If you click "View full trace" at the top of the issue, the trace contains an `http.client` span which has a child `gql` span. Is this not what you would expect? This seems correct to me! Or what would you expect differently?
Generally speaking, could you share your `Sentry.init()` code, and possibly the output when you configure `debug: true`? I think GraphQL/Apollo should be auto-instrumented; you shouldn't have to do anything yourself to get spans for this. Maybe there is a deeper setup issue there 🤔
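For reference, a minimal `Sentry.init` setup with debug logging enabled might look like the sketch below (the DSN is a placeholder, and this assumes `@sentry/node` v8):

```javascript
// instrument.js — must run before any other module is loaded
const Sentry = require("@sentry/node");

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  tracesSampleRate: 1.0, // capture all transactions while debugging
  debug: true,           // print SDK debug logs, incl. instrumentation setup
});
```

With `debug: true`, the startup logs should show which integrations were added, which helps confirm whether GraphQL/Apollo instrumentation was actually applied.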
Sorry, I didn't provide enough links to refer to. See the following frontend and backend Issues. "View full trace" shows that the trace ID allocated on the frontend side is not carried over to the backend side.
The following is the log:
Can you show us your `Sentry.init` call in your Apollo server? There are some suspicious logs that might hint at a config problem:
```
Recording is off, propagating context in a non-recording span
```
I wonder why this is printed at all, given we're starting spans afterwards
```
Finishing "gql" root span "UserDashboardTemplateLatestLead" with ID c94eaf3187b58549
```
So this suggests to me that a span was actually started and sent, but it's not connected to the error that was thrown in between.
Also, would you mind letting us know why you need to call `continueTrace` at all? This shouldn't be necessary if you initialize the SDK correctly.
The following is `Sentry.init`:
@mikan3rd Did you try running your Apollo server without manually continuing the trace, or is there a specific reason for the code inside your `context` function?
I removed the code about `continueTrace`, but the frontend and backend trace IDs remained different.
Could you paste the logs from application startup? The ones that show e.g. "Integration added...." plus everything until the app has settled/fully started up?
The following are the logs from application startup:
@mikan3rd Looking at your `Sentry.init` code above, and given that I don't see any logs about OpenTelemetry wrapping `graphql` (or related libraries): can you confirm that you call `Sentry.init` before you require any other module? This is crucial for all our instrumentation to work, except for `http(s)`. More information here.
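As a sketch of the init-before-imports pattern (file names here are illustrative, not from the original setup): keep the `Sentry.init` call in its own file and load it first, e.g. via `node --require ./instrument.js server.js`, or as the very first line of the entry point:

```javascript
// server.js — Sentry must be initialized before graphql/apollo are loaded,
// otherwise the OpenTelemetry instrumentation cannot wrap those modules.
require("./instrument.js"); // runs Sentry.init() first

// only now load the rest of the app
const { ApolloServer } = require("@apollo/server");
// ... build and start the server as usual
```

With ESM (`import` syntax), imports are hoisted, so the init must live in a separate file loaded via a Node CLI flag rather than as a top-of-file import side effect.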
Thanks for the reply.
I have managed to get tracing to work by putting `Sentry.init` before all imports.
However, I am concerned that I am getting the following error after starting the server.
What is the problem?
```
Sentry Logger [debug]: @opentelemetry/instrumentation-http outgoingRequest on request error() Error: socket hang up
    at Socket.socketCloseListener (node:_http_client:473:25)
    at Socket.emit (node:events:531:35)
    at TCP.<anonymous> (node:net:339:12)
    at TCP.callbackTrampoline (node:internal/async_hooks:130:17) {
  code: 'ECONNRESET'
}
```
Hello @mikan3rd,
We are currently on company-wide hackweek and thus on limited support. We'll take a look at this next week.
@mikan3rd I believe this error can just be ignored. It may be something that your application swallowed until now, but since the SDK hooks in at quite a low level, it surfaces it through a debug log. We have no reason to believe the SDK causes a socket hang-up.
My problem seems to be solved, thank you very much!
### Is there an existing issue for this?

### How do you use Sentry?

Sentry Saas (sentry.io)

### Which SDK are you using?

`@sentry/node`

### SDK Version

8.20.0

### Framework Version

No response

### Link to Sentry event

https://bizforward.sentry.io/issues/5659614696/events/e33fe93c614b41779919285369087372/

### Reproduction Example/SDK Setup

No response
### Steps to Reproduce

In Sentry v7, `continueTrace` is used as follows, and checking the Sentry Issue confirms that the `traceId` used in the frontend is successfully passed on to the backend.

In Sentry v8, the `transactionContext` argument of the `continueTrace` callback has been removed, so I changed the code as follows. However, when I did so, the `traceId` displayed in Sentry's Issue was different for frontend and backend.

I would like to know the cause and possible countermeasures.

Do I have to use `startSpan`, and if so, how do I use it with ApolloServer?
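For context, a hedged sketch of the API difference (illustrative only, not the original reproduction code): in v7 the `continueTrace` callback received a `transactionContext` to start a transaction from, while in v8 the callback takes no arguments and spans started inside it inherit the incoming trace:

```javascript
// Illustrative sketch of the v7 -> v8 continueTrace change.
const Sentry = require("@sentry/node");

// v7 style (removed in v8): the callback received a transactionContext
// Sentry.continueTrace({ sentryTrace, baggage }, (transactionContext) => {
//   return Sentry.startTransaction(transactionContext);
// });

// v8 style: the callback takes no arguments; any span started inside it
// is attached to the trace propagated via sentryTrace/baggage
Sentry.continueTrace({ sentryTrace, baggage }, () => {
  return Sentry.startSpan({ name: "gql" }, (span) => {
    // ... resolve the GraphQL operation here
  });
});
```

That said, as noted in the comments above, this manual call should be unnecessary when `Sentry.init` runs before the HTTP/GraphQL modules are imported, since the incoming headers are then picked up automatically.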
### Expected Result

The trace ID in the Issue should match the `sentryTrace` in the request header.
### Actual Result

The trace ID listed in the Issue does not match the `sentryTrace` in the request header.
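To check this by hand, a small helper (illustrative; the `sentry-trace` header format is `traceId-spanId-sampled`, with a 32-hex-char trace ID and 16-hex-char span ID) can extract the trace ID from the request header so it can be compared with the one shown in the Issue:

```javascript
// Extract the 32-hex-char trace ID from a sentry-trace header value.
// Format: "<traceId>-<spanId>[-<sampled>]", e.g.
// "12345678901234567890123456789012-1234567890123456-1"
function traceIdFromSentryTrace(header) {
  const match = /^([0-9a-f]{32})-([0-9a-f]{16})(?:-([01]))?$/.exec(header.trim());
  return match ? match[1] : null;
}

const header = "12345678901234567890123456789012-1234567890123456-1";
console.log(traceIdFromSentryTrace(header));
// → "12345678901234567890123456789012"
```

If the value extracted from the outgoing frontend request does not equal the trace ID on the backend event, propagation is broken somewhere between the two.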