Xhale1 opened 1 month ago
Hi @Xhale1, thanks for filing this.
I can confirm this is indeed not working, thank you for the reproduction. We're going to look into this.
One thing to note: the reason you're not seeing the issue when `tracesSampleRate` is not defined is that we only show this warning when tracing is enabled; it doesn't mean the app was successfully instrumented. So there is definitely some issue with using `nodeProfilingIntegration`.
cc @JonasBa
As a workaround, you can leave the integration out of `instrument.ts` and set it up inside `index.ts`:

```ts
import * as Sentry from "@sentry/node";
import { nodeProfilingIntegration } from "@sentry/profiling-node";
import Fastify from "fastify";

const fastify = Fastify();
Sentry.setupFastifyErrorHandler(fastify);

Sentry.addIntegration(nodeProfilingIntegration());
```
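For completeness, a minimal sketch of what `instrument.ts` could look like with this workaround (the DSN is a placeholder; adjust options to your setup):

```ts
// instrument.ts -- sketch: the profiling integration is intentionally not added here
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: "<your DSN>", // placeholder
  tracesSampleRate: 1,
});
```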
@andreiborza @Xhale1 are you seeing any profiling SDK logs if you set `debug: true` in the SDK init?
@JonasBa With node profiling (reports as not instrumented):
Sentry Logger [log]: Initializing Sentry: process: 65271, thread: main.
Sentry Logger [log]: Integration installed: InboundFilters
Sentry Logger [log]: Integration installed: FunctionToString
Sentry Logger [log]: Integration installed: LinkedErrors
Sentry Logger [log]: Integration installed: RequestData
Sentry Logger [log]: Integration installed: Console
Sentry Logger [log]: Integration installed: Http
Sentry Logger [log]: Integration installed: NodeFetch
Sentry Logger [log]: Integration installed: OnUncaughtException
Sentry Logger [log]: Integration installed: OnUnhandledRejection
Sentry Logger [log]: Integration installed: ContextLines
Sentry Logger [log]: Integration installed: LocalVariablesAsync
Sentry Logger [log]: Integration installed: Context
Sentry Logger [log]: Integration installed: Modules
Sentry Logger [log]: Integration installed: Express
Sentry Logger [log]: Integration installed: Fastify
Sentry Logger [log]: Integration installed: Graphql
Sentry Logger [log]: Integration installed: Mongo
Sentry Logger [log]: Integration installed: Mongoose
Sentry Logger [log]: Integration installed: Mysql
Sentry Logger [log]: Integration installed: Mysql2
Sentry Logger [log]: Integration installed: Redis
Sentry Logger [log]: Integration installed: Postgres
Sentry Logger [log]: Integration installed: Nest
Sentry Logger [log]: Integration installed: Hapi
Sentry Logger [log]: Integration installed: Koa
Sentry Logger [log]: Integration installed: Connect
Sentry Logger [log]: Integration installed: GenericPool
Sentry Logger [log]: Integration installed: Kafka
Sentry Logger [log]: [Profiling] Profiling integration setup.
Sentry Logger [log]: [Profiling] Continuous profiler mode enabled.
Sentry Logger [log]: Integration installed: ProfilingIntegration
Sentry Logger [log]: Running in CommonJS mode.
Sentry Logger [debug]: @opentelemetry/api: Registered a global for diag v1.9.0.
Sentry Logger [debug]: @opentelemetry/api: Registered a global for trace v1.9.0.
Sentry Logger [debug]: @opentelemetry/api: Registered a global for context v1.9.0.
Sentry Logger [debug]: @opentelemetry/api: Registered a global for propagation v1.9.0.
Sentry Logger [debug]: @opentelemetry/instrumentation-http Applying instrumentation patch for nodejs core module on require hook { module: 'http' }
Sentry Logger [debug]: @opentelemetry/instrumentation-http Applying instrumentation patch for nodejs core module on require hook { module: 'https' }
[Sentry] fastify is not instrumented. This is likely because you required/imported fastify before calling `Sentry.init()`.
Server listening at http://0.0.0.0:3000
Without node profiling (reports as instrumented, as far as I'm aware):
Sentry Logger [log]: Initializing Sentry: process: 65327, thread: main.
Sentry Logger [log]: Integration installed: InboundFilters
Sentry Logger [log]: Integration installed: FunctionToString
Sentry Logger [log]: Integration installed: LinkedErrors
Sentry Logger [log]: Integration installed: RequestData
Sentry Logger [log]: Integration installed: Console
Sentry Logger [log]: Integration installed: Http
Sentry Logger [log]: Integration installed: NodeFetch
Sentry Logger [log]: Integration installed: OnUncaughtException
Sentry Logger [log]: Integration installed: OnUnhandledRejection
Sentry Logger [log]: Integration installed: ContextLines
Sentry Logger [log]: Integration installed: LocalVariablesAsync
Sentry Logger [log]: Integration installed: Context
Sentry Logger [log]: Integration installed: Express
Sentry Logger [log]: Integration installed: Fastify
Sentry Logger [log]: Integration installed: Graphql
Sentry Logger [log]: Integration installed: Mongo
Sentry Logger [log]: Integration installed: Mongoose
Sentry Logger [log]: Integration installed: Mysql
Sentry Logger [log]: Integration installed: Mysql2
Sentry Logger [log]: Integration installed: Redis
Sentry Logger [log]: Integration installed: Postgres
Sentry Logger [log]: Integration installed: Nest
Sentry Logger [log]: Integration installed: Hapi
Sentry Logger [log]: Integration installed: Koa
Sentry Logger [log]: Integration installed: Connect
Sentry Logger [log]: Integration installed: GenericPool
Sentry Logger [log]: Integration installed: Kafka
Sentry Logger [log]: Running in ESM mode.
Sentry Logger [debug]: @opentelemetry/api: Registered a global for diag v1.9.0.
Sentry Logger [debug]: @opentelemetry/api: Registered a global for trace v1.9.0.
Sentry Logger [debug]: @opentelemetry/api: Registered a global for context v1.9.0.
Sentry Logger [debug]: @opentelemetry/api: Registered a global for propagation v1.9.0.
Sentry Logger [debug]: @opentelemetry/instrumentation-http Applying instrumentation patch for nodejs core module on require hook { module: 'http' }
Sentry Logger [debug]: @opentelemetry/instrumentation-http Applying instrumentation patch for nodejs core module on require hook { module: 'https' }
Sentry Logger [debug]: @opentelemetry/instrumentation-fastify Applying instrumentation patch for module on require hook {
module: 'fastify',
version: '4.28.1',
baseDir: '/Users/[...]/Code/sentry-openai/node_modules/.pnpm/fastify@4.28.1/node_modules/fastify'
}
Sentry Logger [debug]: @opentelemetry/instrumentation-fastify Patching fastify preHandler function
Sentry Logger [debug]: @opentelemetry/instrumentation-fastify Patching fastify server.addHook function
Sentry Logger [debug]: @opentelemetry/instrumentation-fastify Patching fastify route.handler function
Sentry Logger [debug]: @opentelemetry/instrumentation-fastify Patching fastify route.handler function
Server listening at http://0.0.0.0:3000
It seems that node profiling might force the SDK to run in CommonJS mode instead of ESM?
@Xhale1 are you setting `profilesSampleRate` as well, or is that not defined? I can see the SDK enables continuous profiling mode, which should only be enabled if `profilesSampleRate` and `profilesSampler` are not set.
I was not, but I tried adding `profilesSampleRate: 1` to my SDK init, and it seems that did not resolve the issue.
Also, I edited my previous reply with additional logs for with-error and without-error
Let me look at the repo. If you set `profilesSampleRate`, then the continuous profiling log should not appear; if it does, then that is an SDK bug. I'm going to investigate.
Ok, so if I set `profilesSampleRate` to > 0 and curl the endpoint (`curl localhost:3000`), I can see that the profiler is started and stopped.
```ts
// instrument.ts
import * as Sentry from "@sentry/node";
import { nodeProfilingIntegration } from "@sentry/profiling-node";

Sentry.init({
  dsn: "",
  debug: true,
  integrations: [nodeProfilingIntegration()],
  tracesSampleRate: 1,
  profilesSampleRate: 1,
});
```
Server logs
Server listening at http://0.0.0.0:3000
Sentry Logger [log]: Flushing client reports based on interval.
Sentry Logger [log]: Flushing outcomes...
Sentry Logger [log]: No outcomes to send
Sentry Logger [debug]: @opentelemetry/instrumentation-http http instrumentation incomingRequest
Sentry Logger [log]: [Tracing] Starting sampled root span
  op: < unknown op >
  name: GET
  ID: aacdcc98567c0451
Sentry Logger [log]: [Profiling] started profiling transaction: GET
(node:78766) [FSTDEP017] DeprecationWarning: You are accessing the deprecated "request.routerPath" property. Use "request.routeOptions.url" instead. Property "req.routerPath" will be removed in `fastify@5`.
(Use `node --trace-deprecation ...` to show where the warning was created)
Sentry Logger [log]: [Tracing] Finishing "< unknown op >" root span "GET" with ID aacdcc98567c0451
Sentry Logger [log]: [Profiling] stopped profiling of transaction: GET
Sentry Logger [log]: SpanExporter exported 1 spans, 0 unsent spans remaining
Sentry Logger [log]: [Profiling] max profile duration elapsed, stopping profiling for: GET
Sentry Logger [log]: [Profiling] stopped profiling of transaction: GET
Sentry Logger [log]: [Profiling] profiler returned null profile for: GET this may indicate an overlapping span or a call to stopProfiling with a profile title that was never started
Sentry Logger [log]: Flushing client reports based on interval.
Sentry Logger [log]: Flushing outcomes...
Did you try sending a request to the server to see if the profiler is started and stopped?
What is happening here in my setup is that the profile is being filtered out by the client (the null profile log) and not sent to the server, as we require a minimum of 2 samples at a ~100 Hz sampling rate (i.e. the handler should take 20ms+).
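To illustrate that threshold, here is a hypothetical test route (not from the reproduction repo) whose handler deliberately takes longer than ~20ms, so a sampled profile gets at least two samples at ~100 Hz (roughly one every 10ms):

```ts
import Fastify from "fastify";

const fastify = Fastify();

// Hypothetical route: the handler takes ~50ms, enough for the profiler to
// capture at least two samples at a ~100 Hz sampling rate.
fastify.get("/profile-test", async () => {
  await new Promise((resolve) => setTimeout(resolve, 50));
  return { ok: true };
});

await fastify.listen({ port: 3000, host: "0.0.0.0" });
```

Hitting `curl localhost:3000/profile-test` should then produce a profile that clears the minimum-sample filter described above.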
I have not tried to verify whether profiling or tracing are working as expected.
Are you saying that the `[Sentry] fastify is not instrumented...` log is misleading in this case and that the server is being properly instrumented as expected?
Hm, I don't think it is properly instrumented if you don't see this line: `@opentelemetry/instrumentation-fastify Applying instrumentation patch for module on require hook`.
@Xhale1 That seems to be the case, yes. I don't know exactly what the check relies on; maybe it's because there are no routes defined in the file?
I should clarify, I have not confirmed myself whether the warning message is accurate or not.
@JonasBa I experienced this warning message in our internal codebase at trainwell, which has a number of routes defined, so I don't think that's the case. I did not define routes in my repo to keep things as minimal as possible.
As far as I'm aware, my repo closely follows the Sentry docs for getting started with Fastify, with the biggest difference being the use of `tsup`. Are you able to comment on whether my repo seems to be set up appropriately according to the docs?
@Xhale1 I doubt that node profiling is the issue here, tbh; are there any other differences? I can see from the example you shared above that the successfully instrumented code runs in ESM and not in CJS like the one above it that reports faulty instrumentation.
@andreiborza Any ideas if tsup causes any issues with our instrumentation?
@Xhale1 Hm, this could be an issue with `tsup`. I just tried using `tsx` instead:
```jsonc
// package.json
...
"scripts": {
  "start": "NODE_OPTIONS='--import ./dist/instrument.js' tsx src/index.ts",
  ...
}
```
and got successful instrumentation with `nodeProfiling`. I don't really see why this is the case though; it needs further investigation.

Are you able to switch away from `tsup` for the time being, or alternatively add the profiling integration at a later stage as I described before?
Chiming in here as this is not working for us either.
Not working: `"start": "NODE_OPTIONS='--import ./dist/instrument.mjs' node -r source-map-support ./dist/src/index.js"` nor `"start": "node --import ./dist/instrument.mjs -r source-map-support ./dist/src/index.js"`
When we enable debug mode, we get this from the logs:

`[Sentry] fastify is not instrumented. Please make sure to initialize Sentry in a separate file that you --import when running node, see: https://docs.sentry.io/platforms/javascript/guides/fastify/install/esm/.`

We compile with tsc (module: "ESNext").
@ollebergkvist thanks for reporting. Do you have a reproduction repo to investigate this?
Just wanted to chime in that I'm seeing the same on a CJS lambda (so it's not just ESM), and I did make sure to have Sentry as the very first import in my file. It's all one big .js bundle at the end; not sure if that's related.
Thanks @spaceemotion. Did you try without the profiling integration? Do you still see the warning then?
Yes. The integration was not bundled and the .node file was also not present, and I still got the message.
@spaceemotion hm, could you provide some more info about your setup? Sounds a bit like a different issue.
You can't just bundle in the profiling package because of its native profiler binaries (the .node files). The steps to get that to work are a bit more involved; you can see them here.
We have a task in our backlog to create a profiling layer, but it might take a while before we can tackle that.
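As a rough illustration of that constraint (a sketch assuming esbuild, not Sentry's documented setup, which is what the linked steps cover): keeping the profiling package external to the bundle lets its native .node binding be resolved from node_modules at runtime instead of being inlined.

```ts
// build.ts -- hypothetical bundling sketch; paths and options are placeholders
import { build } from "esbuild";

await build({
  entryPoints: ["src/handler.ts"],
  bundle: true,
  platform: "node",
  format: "cjs",
  outfile: "dist/handler.js",
  // keep the profiling package (and its .node binary) out of the bundle
  external: ["@sentry/profiling-node"],
});
```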
@andreiborza I modified my repo to use `tsc` instead of `tsup` and the issue persists. The transpiled files in `dist` all look ok to me. Do you know why the Sentry debug logs might report `Running in CommonJS mode` when it was run in ESM, but properly report ESM when node profiling is excluded?
@Xhale1 that's probably because node profiling has a require statement somewhere.
cc @JonasBa do you happen to know?
@andreiborza @Xhale1 Unfortunately yes, and I don't think there is much we can do about it right now. The reason profiling uses require (via createRequire) in ESM is that .node files are not supported in ESM, so we need to use createRequire to actually be able to import them. If we don't do that, node throws an unknown file extension error at runtime.
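For context, a sketch of the general pattern being described (illustrative only; the binding filename is hypothetical and this is not the actual @sentry/profiling-node source):

```ts
import { createRequire } from "node:module";

// .node native addons can't be loaded with `import` under ESM, so the module
// creates its own require() to load the binding.
const requireBinding = createRequire(import.meta.url);

// hypothetical binding path; the real package resolves a prebuilt binary per platform/ABI
const nativeProfiler = requireBinding("./sentry_cpu_profiler.node");

export { nativeProfiler };
```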
@JonasBa Gotcha. Does that mean that using node profiling in combination with ESM is not supported, or that when you attempt to run an app that uses node profiling with ESM, the entire thing will actually run in CJS? Or is it just the Sentry instrumentation that runs in CJS while the rest of the app runs in ESM as expected?
@Xhale1 Afaik there is nothing wrong with using ESM and createRequire like we do here, meaning your program should still run in ESM, except when it comes to requiring the native bindings. Reading the documentation, there is no mention of this triggering a different module resolution for your entire app.
Looking at our app, the check to detect whether we are running in CJS or ESM is the following:

```ts
export function isCjs(): boolean {
  return typeof require !== 'undefined';
}
```
Since we do need to create a require function via createRequire so that we can load the bindings, the check might wrongly log that we are running in CJS even when we are actually running in ESM (it's just that require exists because profiling needs it).

I'm going to see if there is a better way to do this: either rename the require call, or see if we can rely on some other check that does not depend on require being globally available. I'm going to do some validation, but I think everything works fine here and the check is just wrongly reporting CJS.
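A minimal, hypothetical sketch of how that false positive can arise (not the actual SDK code; it assumes the created require ends up visible to the check, e.g. in the same bundled scope or on the global object):

```ts
// false-positive.mts -- run as an ES module
import { createRequire } from "node:module";

// profiling needs a require() so it can load its native .node binding
const require = createRequire(import.meta.url);

function isCjs(): boolean {
  // `require` is now defined in this scope even though the process runs ESM,
  // so this returns true and the SDK logs "Running in CommonJS mode".
  return typeof require !== "undefined";
}

console.log(isCjs()); // true, despite running as an ES module
```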
Thank you for the details! I appreciate your time here and you looking into improving that debug message for better clarity. I think the only remaining mystery is why Sentry reports fastify not being instrumented when node profiling is included?

As a heads up: I'm no longer trying to include node profiling in our codebase, so please don't feel any undue rush from me.
> Thank you for the details! I appreciate your time here and you looking into improving that debug message for better clarity. I think the only remaining mystery is why Sentry reports fastify not being instrumented when node profiling is included?
>
> As a heads up: I'm no longer trying to include node profiling in our codebase, so please don't feel any undue rush from me.
That I sadly don't know yet. I don't know if that check is used for something else besides that log line, and whether it possibly causes us to bail out of instrumentation here.
> As a heads up: I'm no longer trying to include node profiling in our codebase, so please don't feel any undue rush from me.
Is this because of this issue, or is there some other problem that this is causing? I've tried using it and could see my route handlers being profiled, with profiles being ingested in Sentry, so I don't think there would be any blockers here.
I believe we have the same issue with a Fastify server. We get the `[Sentry] fastify is not instrumented. This is likely because you required/imported fastify before calling Sentry.init().` warning, although Sentry is loaded and imported before fastify.
Hey everyone, I've made a change to scope require creation to the binding import (https://github.com/getsentry/sentry-javascript/pull/13834) so that we don't end up tripping CJS detection.
@JonasBa

> Hey everyone, I've made a change to scope require creation to the binding import (https://github.com/getsentry/sentry-javascript/pull/13834) so that we don't end up tripping CJS detection.

Thank you! That's a welcome fix; it should hopefully clear up some confusion.

> Is this because of this issue, or is there some other problem that this is causing?

Truthfully, we're not moving forward with Sentry node profiling due to a lack of confidence in it. Seeing warnings being reported after setting up a minimal repo that (I think?) follows the getting-started documentation makes me worried about how thoroughly it has been tested.

To be clear, I appreciate your and your team's dedication, effort, and time towards Sentry v8 and helping to triage issues in the open!
Sounds good, I appreciate the honesty :) Unfortunately, I simply wasn't aware of the CJS check in our node package (it makes no difference for profiling); we only test that the package runs fine in both environments.
Do you happen to know if the improved CJS check logic will fix the `[Sentry] fastify is not instrumented. This is likely because you required/imported fastify before calling Sentry.init().` message being reported when node profiling is included? My apologies if that was already addressed; it's totally possible I missed it.
@Xhale1 Yes, it will. Right now what is happening is that your code still runs in ESM; however, since profiling creates a require function and stores it in the global scope, that check ends up returning true and generating a false-positive message.

I tested on a local fastify server, and even though Sentry was saying my code is not instrumented, after I fired a few requests I could see the data coming into Sentry, confirming that it was a false positive.
Is there an existing issue for this?
How do you use Sentry?
Sentry Saas (sentry.io)
Which SDK are you using?
@sentry/node
SDK Version
8.30.0
Framework Version
nodejs 20.17.0
Link to Sentry event
No response
Reproduction Example/SDK Setup
I made a repo here to demo the issue: https://github.com/Xhale1/sentry-repro
Steps to Reproduce
`integrations: [nodeProfilingIntegration()]` and `tracesSampleRate: 1` are set. Removing one or the other seems to make the error go away.
Expected Result
Fastify is instrumented properly.
Actual Result
Sentry prints the following to the console: