I'm using @sentry/webpack-plugin, pinned to 2.20.1:
```js
const SENTRY_DEFAULT_ORG = 'talentkit';
//...
// The rest of webpack config
//...
sentryWebpackPlugin({
  org: env.all.SENTRY_ORG || SENTRY_DEFAULT_ORG,
  project: APPS_TO_SENTRY_PROJECTS[app],
  authToken: env.all.SENTRY_AUTH_TOKEN,
  applicationKey: 'AuthoredUp',
})
```
I log env before I run the build, and I can see that SENTRY_AUTH_TOKEN is set.
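For illustration, a minimal check along these lines (a sketch, not the project's actual build script) is enough to confirm the token is present without echoing the secret:

```js
// Sketch: fail early if SENTRY_AUTH_TOKEN is missing, without printing its value.
const token = process.env.SENTRY_AUTH_TOKEN;
if (!token) {
  throw new Error('SENTRY_AUTH_TOKEN is not set; source map upload will fail');
}
console.log(`SENTRY_AUTH_TOKEN is set (${token.length} characters)`);
```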
Logs from CI:
```
[sentry-webpack-plugin] Info: Sending telemetry data on issues and performance to Sentry. To disable telemetry, set `options.telemetry` to `false`.
node:internal/errors:984
  const err = new Error(message);
              ^
Error: Command failed: /builds/authoredup/app/node_modules/@sentry/cli-linux-x64/bin/sentry-cli releases new d9a03d0243493db2ea820f41979a5da4dff0183b
error: API request failed
  caused by: sentry reported an error: unknown error (http status: 403)
Add --log-level=[info|debug] or export SENTRY_LOG_LEVEL=[info|debug] to see more output.
Please attach the full debug log to all bug reports.
    at genericNodeError (node:internal/errors:984:15)
    at wrappedFn (node:internal/errors:538:14)
    at ChildProcess.exithandler (node:child_process:422:12)
    at ChildProcess.emit (node:events:518:28)
    at maybeClose (node:internal/child_process:1105:16)
    at Socket.<anonymous> (node:internal/child_process:457:11)
    at Socket.emit (node:events:518:28)
    at Pipe.<anonymous> (node:net:337:12) {
  code: 1,
  killed: false,
  signal: null,
  cmd: '/builds/authoredup/app/node_modules/@sentry/cli-linux-x64/bin/sentry-cli releases new d9a03d0243493db2ea820f41979a5da4dff0183b'
}
Node.js v20.12.1
```
Timestamp at the end of logs: 2024-09-10T17:28:01.952Z
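Based on the hint in the error output, a rough sketch of how more detail could be captured on the next failure (this assumes the sentry-cli child process inherits the build environment; exporting SENTRY_LOG_LEVEL=debug in the CI job config would do the same):

```js
// Sketch (illustrative): enable verbose sentry-cli output for the next run.
// Place this before the webpack build / plugin initialization so the spawned
// sentry-cli process picks it up from the inherited environment.
process.env.SENTRY_LOG_LEVEL = 'debug';
```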
I tried re-running with logging added, but then everything went fine:
```
[sentry-webpack-plugin] Info: Sending telemetry data on issues and performance to Sentry. To disable telemetry, set `options.telemetry` to `false`.
> Found 4 files
> Analyzing 4 sources
> Adding source map references
> Bundled 4 files for upload
> Bundle ID: d1178525-c303-57ce-8fd6-b83168f1a2bf
> Uploaded files to Sentry
> File upload complete (processing pending on server)
> Organization: talentkit
> Project: aup-platform
> Release: 492057a01bce9c80d01fc0f80fff29872e330618
> Dist: None
> Upload type: artifact bundle
Source Map Upload Report
  Scripts
    ~/07091318-aa3c-4660-9087-ab9731006f93-0.js (sourcemap at 07091318-aa3c-4660-9087-ab9731006f93-0.js.map, debug id 07091318-aa3c-4660-9087-ab9731006f93)
    ~/b9c7abf0-c082-493f-9cd0-5afcf5349594-1.js (sourcemap at b9c7abf0-c082-493f-9cd0-5afcf5349594-1.js.map, debug id b9c7abf0-c082-493f-9cd0-5afcf5349594)
  Source Maps
    ~/07091318-aa3c-4660-9087-ab9731006f93-0.js.map (debug id 07091318-aa3c-4660-9087-ab9731006f93)
    ~/b9c7abf0-c082-493f-9cd0-5afcf5349594-1.js.map (debug id b9c7abf0-c082-493f-9cd0-5afcf5349594)
[sentry-webpack-plugin] Info: Successfully uploaded source maps to Sentry
```
Fun.
> I log env before I run the build, and I can see that SENTRY_AUTH_TOKEN is set.
Thanks for confirming this, and for sharing the timestamp. We'll take a look on our end!
While folks look into this on our end, I dug up some related issues:
> Company network blocked POSTs to Sentry.io (by Zscaler). I added .sentry.io to the SSL certification allow list, and the problem was gone.
From this: by any chance, is your build agent a self-hosted one? Are you using a proxy of any sort?
Also reopened this:
@panta82 I couldn't find any 403s for your organization in that timeframe. Is it possible the request is going through some reverse proxy and failing there?
@bruno-garcia This is happening in a Docker container based on debian:bullseye, running on a dedicated on-demand VPS on Hetzner. No special proxies. And the npm install before that works, so there is internet access.
I tried recreating the situation in a Docker container on the same machine, but now the uploads work.
> And the npm install before that works, so there is internet access.
The log indicates it did reach an HTTP server that returned 403, so that checks out. I was able to look at audit logs on our end that show 403s for the chunk-upload endpoint. I can see this happening a few times, and the URL contains the org slug, but none of the occurrences had the org slug talentkit in them.
That's what made me wonder if somehow you're reaching a different HTTP server (or a reverse proxy).
@szokeasaurusrex could we add further logging to sentry-cli to dump all HTTP response headers to the console when this happens? I'm assuming our load balancer and HTTP servers do return some X-Sentry-Something header to help us debug this further. If not, could we add such a header?
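In the meantime, a quick way to capture the headers manually from the failing machine would be something like this (a rough sketch; the endpoint path is only illustrative and it needs Node 18+ for the global fetch):

```js
// Sketch: hit the org's chunk-upload endpoint and dump status + response
// headers, to see whether the 403 response carries anything identifying.
// The URL and auth token handling here are illustrative only.
const url = 'https://sentry.io/api/0/organizations/talentkit/chunk-upload/';

fetch(url, {
  headers: { Authorization: `Bearer ${process.env.SENTRY_AUTH_TOKEN}` },
}).then((res) => {
  console.log('status:', res.status);
  for (const [name, value] of res.headers) {
    console.log(`${name}: ${value}`);
  }
});
```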
> running on a dedicated on-demand VPS on Hetzner
@panta82, unfortunately, it is a known issue that certain Hetzner IP ranges are on Google Cloud's denylist, and that Google Cloud blocks all requests from these ranges. Since Sentry is hosted on Google Cloud, the most likely explanation for these intermittent 403 failures is that they occur whenever your server is running on one of the IP addresses that Google Cloud bans.
Since this 403 status is being returned by Google Cloud before the request even reaches our infrastructure, there is sadly nothing we can do on our end to get your IP unblocked. Other users who have experienced the same issue have had success cycling their server's IP address until they find one that is not banned; Google only appears to be banning some Hetzner IP ranges, so I would suggest you try something like this as well.
You can also try setting the SENTRY_ALLOW_FAILURE environment variable to 1 or true. This should allow your CI to pass even if you get a 403 error; however, any builds where you get the 403 will not have the sourcemaps uploaded, meaning that the stack traces will not be symbolicated until you manually upload the sourcemaps.
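One way to set it, if you'd rather not touch the CI job configuration, is directly in the webpack config before the plugin runs (a rough sketch; it assumes the sentry-cli process spawned by the plugin inherits the build process's environment):

```js
// Sketch: make source map upload failures non-fatal in CI.
// Assumes the spawned sentry-cli inherits this environment variable;
// exporting SENTRY_ALLOW_FAILURE=1 in the CI job itself works equally well.
if (process.env.CI) {
  process.env.SENTRY_ALLOW_FAILURE = '1';
}
```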
Thanks @szokeasaurusrex, that makes sense. I'll try setting SENTRY_ALLOW_FAILURE.
I can't hand-pick IP ranges, because the tooling automatically leases VPSes on demand, runs the jobs, then releases them. Maybe I could proxy outgoing traffic somehow, but it's probably not worth the trouble unless the issue escalates.
> I can't hand-pick IP ranges, because the tooling automatically leases VPSes on demand
Gotcha, I was thinking this might be the case, given that you are only seeing the errors intermittently. Perhaps there is some way you could manually make an HTTP request to some Sentry endpoint at the beginning of the CI pipeline when the VPS starts, and if you get a 403, immediately retry with a new VPS?
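For example, something along these lines as a preflight step (a rough sketch; the probe URL is just an illustrative Sentry endpoint, and the non-zero exit is meant to tell your orchestration to discard the VPS and lease another):

```js
// Sketch: fail fast if this VPS's IP appears to be blocked before Sentry.
// Any Sentry URL that normally responds works as a probe; requires Node 18+.
const probeUrl = 'https://sentry.io/api/0/';

fetch(probeUrl).then((res) => {
  if (res.status === 403) {
    console.error('Got 403 from Sentry - this VPS IP is likely blocked, aborting.');
    process.exit(1); // let the CI tooling retry on a fresh VPS
  }
  console.log(`Sentry reachable (status ${res.status}), continuing build.`);
});
```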
In any case, using SENTRY_ALLOW_FAILURE should at least hopefully address your original complaint from your Tweet about wanting a way to avoid having these failures block your CI.
Since we sadly cannot do anything on our end to get these Hetzner IPs unblocked, and since SENTRY_ALLOW_FAILURE should at least unblock your CI, I am going to close this issue for now. Feel free, though, to reach out with a new issue if you encounter any further difficulties; as long as it is something we are able to assist with, we are happy to help.
Description
Related to another issue, but that one has many different reasons/root causes:
This issue was raised on Twitter: https://x.com/thepanta82/status/1833547631521632685 See: https://x.com/thepanta82/status/1833562947517616622