Closed · Janpot closed this 2 years ago
Do we currently also generate sourcemaps when running our tests? Wondering if we could easily test the assumption about sourcemap impact by opening a PR where we disable sourcemap generation and comparing it with the one that has it enabled? 🤔
Maybe, but the sourcemaps are an integral part of the Sentry setup. I just mentioned it as I've seen big memory usage differences with different webpack `devtool` settings.
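For context, a minimal sketch of what tweaking `devtool` looks like from a Next.js config; the values here are illustrative assumptions, not what this repo uses:

```js
// next.config.js — illustrative only; the devtool choices below are assumptions,
// not the settings used in this repo.
module.exports = {
  webpack: (config, { dev }) => {
    // Cheaper sourcemaps during development, full 'source-map' for production builds.
    // Different devtool settings trade sourcemap quality against build memory/time.
    config.devtool = dev ? 'eval-cheap-module-source-map' : 'source-map';
    return config;
  },
};
```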
Let's not upload sourcemaps automatically and do it manually instead; that should be fine for us to do. Hopefully that will help.
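If we did go the manual route, the upload could happen outside the build, e.g. via the `@sentry/cli` Node wrapper. A sketch under assumptions: the release name, env variable, and paths are hypothetical, not what this repo uses.

```js
// upload-sourcemaps.js — a sketch of a manual upload step, not an actual script in this repo.
const SentryCli = require('@sentry/cli');

async function upload() {
  const release = process.env.RELEASE_VERSION; // hypothetical release identifier
  const cli = new SentryCli();

  await cli.releases.new(release);
  await cli.releases.uploadSourceMaps(release, {
    include: ['.next'], // hypothetical path to the built assets
    urlPrefix: '~/_next',
  });
  await cli.releases.finalize(release);
}

upload().catch((err) => {
  console.error(err);
  process.exit(1);
});
```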
I believe it'll be sourcemap generation that will be heavy, not the uploading. And we'll need to generate good sourcemaps for Sentry regardless of whether we're uploading them.
> I believe it'll be sourcemap generation that will be heavy, not the uploading
Yep, I expressed myself a bit poorly: if it's Sentry that's generating them during the build and we can skip that, it should be fine. Got it, so it seems like we need to generate the source maps in any case now that we use Sentry...
Sentry should not be generating sourcemaps, it should only upload them; our build is supposed to generate sourcemaps, so that would mean disabling sourcemaps for our build.
> Sentry should not be generating sourcemaps, it should only upload them; our build is supposed to generate sourcemaps, so that would mean disabling sourcemaps for our build
The Sentry plugin wraps the Next.js config and alters it so that our build generates sourcemaps in production, and then it uploads them at the end of the build.
I think what we could do is (sketched in the config below):

- Set `productionBrowserSourceMaps` in the Next.js config to make sure sourcemaps are generated in our build (the Sentry plugin should do it under the hood as well, but let's not rely on that).
- Set `hiddenSourceMaps` to `false` in the Sentry plugin to make sure our sources contain a `sourceMappingURL` directive. We want this so that we can debug in production, and so that the Sentry backend can locate them as well.
- Set `dryRun` to `true` to prevent Sentry from uploading the sourcemaps to their backend after the build.
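Roughly, the three changes combined could look like this. This is a sketch, not the repo's actual config: `withSentryConfig`'s shape and the exact option names (e.g. `hideSourceMaps` vs the `hiddenSourceMaps` mentioned above) depend on the `@sentry/nextjs` version, so treat the names as assumptions.

```js
// next.config.js — a sketch of the three changes above, not this repo's actual config.
const { withSentryConfig } = require('@sentry/nextjs');

const nextConfig = {
  // 1. Always generate browser sourcemaps in our own production build,
  //    independently of what the Sentry plugin does under the hood.
  productionBrowserSourceMaps: true,
  sentry: {
    // 2. Keep the sourceMappingURL directive in the emitted files so we can
    //    debug in production and the Sentry backend can locate the maps.
    //    (Exposed as `hideSourceMaps` in the @sentry/nextjs versions I know of.)
    hideSourceMaps: false,
  },
};

const sentryWebpackPluginOptions = {
  // 3. Skip the upload to Sentry's backend at the end of the build;
  //    we'd upload manually instead.
  dryRun: true,
};

module.exports = withSentryConfig(nextConfig, sentryWebpackPluginOptions);
```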
Current behavior 😯
Regularly seeing `test_static` CircleCI jobs fail because of OOM issues. Example: https://app.circleci.com/pipelines/github/mui/mui-toolpad/3578/workflows/b14c82c8-b004-45c1-b1ca-5a30412b6154/jobs/13481
Insights confirmed an increase in max memory consumption since Sept 30.
Might be caused by https://github.com/mui/mui-toolpad/pull/1043. I'm putting my money on the source map changes 🙂
We were already quite close to 100%, so if we can't bring memory consumption down, scaling the instance may be in order.