mcmxcdev opened this issue 5 months ago
integrations should use utils package as usual. Were you able to replicate this locally? Does clearing the vercel caching work?
We reverted to the CaptureConsole class usage for the time being, so I cannot easily reproduce it right now.
I tested in Vercel myself with basic SvelteKit app and couldn't reproduce, so going to close this for now, we'll need a reproduction to dig in further.
If anyone else experiences this please reach out, we can re-open the issue and take a look. Thanks!
I’m also hitting this with a sveltekit app — fails consistently on every second or third deployment, not reproducible locally. Same error, that @sentry/utils can’t be found, using a pnpm monorepo with sentry in a shared package if that helps. Also did nothing other than updating packages.
Completely breaks our app in production, and should be reopened. I’ve had to remove all traces of Sentry in the meantime.
@madeleineostoja please share your sveltekit, vite and sdk version so we can try and reproduce it!
fails consistently on every second or third deployment, not reproducible locally
This makes me feel like it's a caching issue, either because of Vercel or pnpm or both. If you run pnpm why @sentry/utils in your repo, what does it return?
Versions:
SvelteKit: 2.4.3
Vite: 5.0.12
SDK: 7.95.0
My full dependency graph for @sentry/utils is a bit of a monster because of how much it's used throughout my monorepo, but if it's helpful here's the full output (@bokeh/:package are all internal monorepo packages — @bokeh/utils exports the sanity client and base config to be used by packages, @bokeh/config exports the base vite config, which includes the sanity plugin)
pnpm -r why @sentry/utils
Worth noting that @sentry/sveltekit is a direct dependency of each app (bokeh/apps/app, bokeh/apps/portfolio, etc), but the above output shows @sentry/utils pulled in from the shared monorepo packages it's used in instead. Unsure if that could cause any issues.
This makes me feel like it's a caching issue, either because of vercel or pnpm or both
My thoughts too, though pnpm's dep graph should be very deterministic, so it's strange I can't reproduce locally
This seems to have self resolved for me for now by deleting and regenerating my pnpm lockfile. Still weird and still probably warrants some more investigation, but unfortunately I don’t have time to dive much deeper at the moment
EDIT: I lied, issue has resurfaced. Again I changed nothing to do with sentry, and again redeploying the same build can fix it, suggesting a caching issue with Sentry + Vercel.
Also worth noting I use Turborepo with Vercel's remote caching, and forcing a redeployment busts this remote cache. I'd suspect something going on there, but I haven't had time to nail down a consistent repro, especially since this is affecting the uptime of our production app; I just need it to work and can't be experimenting.
Package caching is unfortunately not something we can influence from within our packages. All we can do is declare our dependencies carefully and correctly (which I believe we do). Please check the behaviour of your build tooling and let us know if you have any concrete suspicions that we are doing something wrong!
Only suspicion is that sentry is the only package this happens for out of the dozens and dozens I have installed, and it seems I’m not the only one.
Happy to just leave sentry out of my stack, it’s not the first production breaking bug I’ve had with these SDKs
@madeleineostoja Totally fair. I'd personally rather blame pnpm than Sentry here but your technical decisions are up to you.
Our workaround for the time being was to disable the turborepo cache, which somehow seems related.
We have now upgraded from 7.98.0 to 7.105.0 and still encounter this after re-enabling the turbo cache.
I tried explicitly installing @sentry/utils as a dependency, still erroring out. This blocks us completely from using the turborepo cache for quicker builds; we might consider removing Sentry from our application.
@mcmxcdev That honestly sounds like turborepo is misconfigured.
We have been using the same config for months already; it only started breaking when upgrading @sentry/sveltekit to higher than 7.91.0.
turbo.json
{
"$schema": "https://turbo.build/schema.json",
"globalEnv": ["VERCEL", "ANALYZE", "NODE_ENV"],
"remoteCache": {
"enabled": true
},
"pipeline": {
"dev": {
"cache": false,
"persistent": true
},
"build": {
"dependsOn": ["^build"],
"outputs": [".svelte-kit/**", ".vercel_build_output/**", ".vercel/**", "build/**"]
},
"preview": {
"dependsOn": ["^build"]
},
"test": {},
"test:unit": {},
"test:ui": {},
"check": {},
"lint": {},
"lint:fix": {},
"biome:check": {},
"biome:check:fix": {},
"format": {},
"format:check": {},
"storybook": {
"cache": false,
"persistent": true
},
"build-storybook": {}
}
}
I honestly don't know what the issue might be. I don't think we are doing anything weird or wrong. We need a reproduction going forward.
This is such a frustrating thread. First blaming pnpm, then blaming turborepo, and in another ticket for an issue I also experienced, blaming NPM's resolution algorithm. It might be the case that weird interactions with libraries outside of Sentry's direct influence are causing issues, but you can't just palm off issues that several users are reporting onto other tools when Sentry's SDKs are the common denominator among them.
Idk, if this was some OSS project I wouldn't get ruffled up, but these are SDKs for a for-profit platform and the lack of responsibility is baffling.
@madeleineostoja Yup I get it. Trust me, this is frustrating for me as well. I also don't like spending my time on random compatibility issues with packages that constantly cause headaches (especially pnpm). Not having reproduction also sucks - we can only make assumptions.
If I knew what the issue was I would fix it. In my mind, the most we can do to ensure a module is found is to check the dependencies of our packages. To me, they look fine. This leaves us with the other funky players in the game which are pnpm and turborepo. @mcmxcdev mentioned that disabling turborepo fixed the problem. Idk but my deduction abilities don't really point at our SDK as the offender here. Pnpm has caused massive opaque problems for us in the past, ever since it was introduced. It caches aggressively and has extremely weird resolution patterns (which we even reported upstream). Pnpm does not seem to play nicely with packages that have strict version requirements. Please excuse me that I have a tendency to blame pnpm, but historically it worked out for me.
Feel free to double-check our implementation. Also, any sort of reproduction would massively help. I haven't been able to reproduce.
I'm having the same issue too, using pnpm & turborepo. Manually installed @sentry/utils but it still randomly fails.
Manually redeploying from the Vercel dashboard seems to fix the deploy. But the next time we deploy, it randomly happens on one of the sites in the monorepo.
Looking at Turborepo Run Summary in Vercel's dashboard, it shows cache changing.
But one side seems cut off at 100 characters; could something be doing a comparison with a 100-character limit?
Could this issue be related to: https://github.com/vercel/turbo/issues/6823?
@joshnuss that looks slightly problematic but I haven't tried it out. Would you mind asking about this upstream?
@lforst Sure thing. I commented on Turborepo issue
Just started getting this in our Vercel deploys as well. We are also on pnpm and turborepo. The struggle is that I'm not getting any errors in the build process, just the serverless function crashing when you hit the site. I did a redeploy of the build and it worked, so caching does seem to be at play.
Cannot find module '@sentry/utils' Require stack: package.json?
INIT_REPORT Init Duration: 665.40 ms Phase: invoke Status: error Error Type: Runtime.ExitError
Error: Runtime exited with error: exit status 1

Did anyone manage to solve this? I would love to re-enable turbo remote caching again, which would save us plenty of CI time.
@smart @mcmxcdev I recommend you open a new issue in the turborepo repository. With reproduction and links to your builds. Seems like the maintainers don't check comments on old issues.
We actually just got rid of Sentry and our builds work fine again with turbo cache enabled.
This issue is solved for us, but I would keep it open since there are other people affected by it.
The solution for me was to use npm in our workflows.
The project uses pnpm and gets deployed to an Azure web app; my workflow builds the app and zips all the files to send them to Azure.
For some reason, once you zip and unzip the folder, the modules can't be found.
I think it is related to the symbolic links that pnpm uses.
I can reproduce it pretty easily:
pnpm install
pnpm build
zip release.zip * -r
unzip
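One hedged explanation for the zip/unzip reproduction above: Info-ZIP's zip follows symlinks by default and stores copies of their targets, so pnpm's symlink-based node_modules layout does not survive a zip round-trip, while tar preserves links as links. A minimal sketch of the difference, using throwaway paths that are not from the actual project:

```shell
# Demonstrate that tar preserves symlinks across archive/extract.
# pnpm builds node_modules out of symlinks into a content store,
# so an archiver that dereferences links breaks its layout.
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# pnpm-style layout: node_modules/@sentry/utils is a symlink into a store
mkdir -p store/sentry-utils node_modules/@sentry
echo "module.exports = {}" > store/sentry-utils/index.js
ln -s ../../store/sentry-utils node_modules/@sentry/utils

# tar keeps the symlink intact across archive and extract
tar -cf release.tar node_modules store
mkdir extracted
tar -xf release.tar -C extracted

if [ -L extracted/node_modules/@sentry/utils ]; then
  echo "symlink preserved"
fi
```

If zip is required by the deploy pipeline, its -y/--symlinks flag stores links as links instead of dereferencing them; whether the Azure deploy step then resolves those links correctly is a separate question.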
Got a similar error for @sentry/nextjs when running Jest (29.7.0) after upgrading from 7 to 8.3.0 → Cannot find module '@sentry/nextjs'.
We're using pnpm and have a turborepo setup.
@ludwighogstrom can you share more of the error? Is there a stack trace? More information? Thank you!
Here is the stack trace:
at Resolver._throwModNotFoundError (../../node_modules/.pnpm/jest-resolve@29.7.0/node_modules/jest-resolve/build/resolver.js:427:11)
at Object.<anonymous> (src/hooks/__tests__/useReportToSentry.test.ts:8:57)
In the test we're importing @sentry/nextjs to be able to check if error details are being sent to Sentry.
import * as Sentry from "@sentry/nextjs";
...
const captureExceptionSpy = jest.spyOn(Sentry, "captureException");
...etc
However, there are other tests that break in the same way where we don't import @sentry/nextjs in the actual test and only import Sentry in the file being tested.
Everything seems to work fine otherwise. It's only when running Jest we have spotted the error.
Not sure if it's related, but we have followed a "guide" from MSW to solve another Cannot find module problem: Cannot find module 'msw/node' from 'src/__mocks__/api/server.ts'.
Stack trace:
at Resolver._throwModNotFoundError (../../node_modules/.pnpm/jest-resolve@29.7.0/node_modules/jest-resolve/build/resolver.js:427:11)
at Object.<anonymous> (src/__mocks__/api/server.ts:11:15)
at Object.<anonymous> (jest.setup.js:11:17)
The guide: https://github.com/mswjs/msw/issues/1786#issuecomment-1782559851
Not sure if it's worth mentioning, but in this test we have the jsdom environment:
/**
* @jest-environment jsdom
*/
The solution from the "guide":
testEnvironmentOptions: {
// This is needed to make MSW (node) work in JSDOM environment.
// See https://github.com/mswjs/msw/issues/1786#issuecomment-1786122056
// See https://mswjs.io/docs/migrations/1.x-to-2.x#cannot-find-module-mswnode-jsdom
customExportConditions: [""],
},
...could maybe affect how Sentry gets imported?
@ludwighogstrom My mental pattern matching makes me think that this is less an issue with the SDK than with Jest resolution (or rather your setup thereof). Since it is very hard to debug without having the setup, would you mind sharing a reproduction example?
I see! Not sure how easy that would be... let's see if I have the time later today to look into that.
Added some more info in my comment above. Could potentially be several libraries/configurations that causes the problem (making it hard to know what to pick for a minimal reproduction 😅).
Added some more info in my comment above
Ah, we definitely use export conditions in the SDKs package.json, it's possible that that will mess up your module resolution. Unfortunately, I don't think this is something we will be able to fix from within the SDK meaning that this is something msw should change. Maybe you can somehow configure msw to be resolved in a specific way, then you can get rid of this export condition override.
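For context on why that override matters: a package's exports field in package.json can map the same entry point to different files depending on which export conditions the resolver activates, and Jest's customExportConditions: [""] forces only the unnamed "default" condition, bypassing node/browser-specific entries. A purely illustrative exports map (hypothetical, not Sentry's actual package.json):

```json
{
  "name": "some-sdk",
  "exports": {
    ".": {
      "types": "./build/types/index.d.ts",
      "node": "./build/cjs/index.js",
      "browser": "./build/esm/index.js",
      "default": "./build/cjs/index.js"
    }
  }
}
```

With the default Jest/Node conditions, the "node" entry would be picked; with customExportConditions: [""], only "default" (and "types" for TypeScript) can match, which is why that MSW workaround can change how every package in the project resolves.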
Thanks! Will look into that. Some comments in the MSW thread points out that their suggested solution is a bit blunt. Focusing on solving that instead of creating minimal reproduction.
Sorry for hijacking this issue :)
Is there an existing issue for this?
How do you use Sentry?
Sentry Saas (sentry.io)
Which SDK are you using?
@sentry/sveltekit
SDK Version
7.98.0
Framework Version
7.98.0
Link to Sentry event
No response
SDK Setup
Steps to Reproduce
We upgraded from 7.91.0 to 7.98.0 today and encountered breaking Vercel deployments:
The stack trace clearly points to captureconsole.js, which made it clear to me that switching from the CaptureConsole class to captureConsoleIntegration led to the breakage.
Expected Result
Vercel deployments should work as normal
Actual Result
-