getsentry / sentry-javascript

Official Sentry SDKs for JavaScript
https://sentry.io
MIT License

Potential infinite loop with canvas recording web worker #13743

Open billyvg opened 1 week ago

billyvg commented 1 week ago

Seeing a potential cycle with canvas replay web worker. Stack trace looks something like this:

sendBufferedReplayOrFlush
startRecording
...
getCanvasManager
new CanvasManager
initFPSWorker
new window.Worker

Then the part that seems to cycle:

sendBufferedReplayOrFlush
stopRecording
_stopRecording
...
???.reset (mutation buffer, I think?)
CanvasManager.reset
initFPSWorker
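
If CanvasManager.reset() re-creates the FPS worker without terminating the previous one, every pass through this stop/start path would leak a worker. A hypothetical sketch of that failure mode (names modeled on the trace above, not the actual CanvasManager source):

    class CanvasManager {
      constructor(workerUrl) {
        this.workerUrl = workerUrl;
        this.initFPSWorker();
      }

      initFPSWorker() {
        // A fresh dedicated worker is spawned on every call...
        this.worker = new window.Worker(this.workerUrl);
      }

      reset() {
        // ...but if the previous worker is never terminated first
        // (this.worker.terminate()), each stop/start cycle leaves
        // one more worker alive.
        this.initFPSWorker();
      }
    }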

Customer says this generally happens after a user comes back to the tab from a long period of idling.

Zendesk ticket

trogau commented 1 week ago

Hi folks, I reported this issue via support & they advised to come here. Happy to provide more information as needed.

To explain in a bit more detail:

We have an internal application built on Node.js with Vue.js. We are running the Vue Sentry package, version 8.25.0 (which we realise is a couple of versions behind current).

This issue was reported by our internal users, whose Chrome tabs (latest Chrome version, running on Chromebooks - Intel i5s with 8GB of RAM) freeze when they perform certain actions. Some of these actions trigger console errors, which might be contributing to the behaviour, but I'm not sure about that.

When looking at a frozen tab, there's not much we can diagnose - DevTools is locked up. We can see in Chrome's Task Manager that there are many, many dedicated workers running under the frozen Chrome process, and memory usage seems significantly higher than normal.

The tab remains frozen, with periodic dialogs from Chrome asking whether we want to wait or exit. I think waiting just spins up more dedicated workers, though it's hard to tell: the machine is barely usable by this point, and there are so many workers that it's hard to see what is going on. The only recovery is to close the tab.

To identify the issue, we made a little Chrome extension that overrides the Worker constructor and captures a stack trace every time a worker is created. It showed something like the following, repeated over and over again:

2024-09-20T16:54:21+10:00 | https://app.example.com | [ip address redacted] | [16:54:20] Creating worker with script: blob:https://app.explorate.co/4899ff93-b770-4fa7-8345-bc6aaa98fa2d, Stack trace: Error
    at new <anonymous> (chrome-extension://egfiddhbdemmalmbdeockdnknmffohmg/injected.js:12:32)
    at Nrt.initFPSWorker (https://app.explorate.co/assets/index-dbcae719.js:750:9710)
    at Nrt.reset (https://app.explorate.co/assets/index-dbcae719.js:750:7943)
    at Xet.reset (https://app.explorate.co/assets/index-dbcae719.js:746:14920)
    at https://app.explorate.co/assets/index-dbcae719.js:746:27620
    at Array.forEach (<anonymous>)
    at https://app.explorate.co/assets/index-dbcae719.js:746:27607
    at https://app.explorate.co/assets/index-dbcae719.js:746:15369
    at https://app.explorate.co/assets/index-dbcae719.js:746:43554
    at Array.forEach (<anonymous>)
    at Fg._stopRecording (https://app.explorate.co/assets/index-dbcae719.js:746:43542)
    at Fg.stopRecording (https://app.explorate.co/assets/index-dbcae719.js:748:6110)
    at Fg.stop (https://app.explorate.co/assets/index-dbcae719.js:748:6417)
    at Fg._refreshSession (https://app.explorate.co/assets/index-dbcae719.js:748:9881)
    at Fg._checkSession (https://app.explorate.co/assets/index-dbcae719.js:748:9801)
    at Fg.checkAndHandleExpiredSession (https://app.explorate.co/assets/index-dbcae719.js:748:8242)
    at Fg._doChangeToForegroundTasks (https://app.explorate.co/assets/index-dbcae719.js:748:11563)
    at _handleWindowFocus (https://app.explorate.co/assets/index-dbcae719.js:748:11189)
    at r (https://app.explorate.co/assets/index-dbcae719.js:741:4773)
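
The shim itself is only a few lines - roughly this (a simplified sketch, not our exact extension code; the real one also forwards each entry to a remote logging endpoint):

    // injected.js - injected into the page at document_start, before the
    // app's bundles load. Wrap the native Worker constructor so every
    // instantiation logs the script URL and the call site's stack trace.
    const NativeWorker = window.Worker;

    window.Worker = class extends NativeWorker {
      constructor(scriptURL, options) {
        console.log(
          `Creating worker with script: ${scriptURL}, Stack trace:`,
          new Error().stack
        );
        super(scriptURL, options);
      }
    };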

I was able to reproduce this on my machine by:

  1. Loading our application
  2. Moving to a different tab and/or leaving the PC for an hour or so
  3. Coming back to the application and resuming activity in the initial tab

Doing that would regularly trigger a burst of worker creation. On my more powerful laptop (i7 / 32GB) I triggered about 100 workers being created at once, though it didn't cause any noticeable performance issues.

My guess is that on the lower spec machines, when a lot of workers are created it simply crawls to a halt and then crashes, and that there is a loop or race condition that is triggering endless worker creations in the Sentry Replay code, either as a direct result of something weird in our code or just a random bug somewhere.

There are two things we have on our TODO to try here:

  1. Upgrade to the latest version of the Sentry/vue package
  2. Disable the canvas recording (see the sketch below)
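
For the second item, disabling canvas recording should just mean dropping the canvas integration from Sentry.init() while keeping session replay itself. A minimal sketch, assuming the v8 @sentry/vue API (app and the DSN here are placeholders):

    import * as Sentry from "@sentry/vue";

    // app is our Vue application instance; the DSN is a placeholder.
    Sentry.init({
      app,
      dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
      integrations: [
        Sentry.replayIntegration(),
        // Sentry.replayCanvasIntegration(), // dropped to disable canvas recording
      ],
      // Our sample rates: 5% of sessions, 100% of sessions with errors.
      replaysSessionSampleRate: 0.05,
      replaysOnErrorSampleRate: 1.0,
    });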

Open to any other suggestions as well if it helps zero in on the issue.

billyvg commented 1 week ago

Thanks for the detailed description @trogau -- just want to clarify a few details:

  1. What are your replaysSessionSampleRate and replaysOnErrorSampleRate set to?
  2. Does your extension throw an error when a worker is created, or does it only log?
  3. Does _handleWindowFocus appear at the top of the stack trace every time?

trogau commented 1 week ago
  1. Sample rates are 0.05 for session and 1.0 for errors
  2. No, doesn't throw an error when a worker is created, only logs the event & sends it to our remote endpoint to catch the data.
  3. No, sorry - _handleWindowFocus actually only seems to show up in a couple of the most recent events from when I was doing some testing yesterday. The one below is more representative of what we're seeing:
2024-09-16T14:51:29+10:00 | https://app.example.com | [ip address redacted] | [14:51:28] Creating worker with script: blob:https://app.explorate.co/ae32033e-b5b8-4299-acf6-6173dde42e7f, Stack trace: Error
    at new window.Worker (chrome-extension://mfenbcgblaedimllfnpabdkgcbggfcml/injected.js:11:32)
    at Nrt.initFPSWorker (https://app.explorate.co/assets/index-da618cdf.js:750:9710)
    at Nrt.reset (https://app.explorate.co/assets/index-da618cdf.js:750:7943)
    at Xet.reset (https://app.explorate.co/assets/index-da618cdf.js:746:14920)
    at https://app.explorate.co/assets/index-da618cdf.js:746:27620
    at Array.forEach (<anonymous>)
    at https://app.explorate.co/assets/index-da618cdf.js:746:27607
    at https://app.explorate.co/assets/index-da618cdf.js:746:15369
    at https://app.explorate.co/assets/index-da618cdf.js:746:43554
    at Array.forEach (<anonymous>)
    at Fg._stopRecording (https://app.explorate.co/assets/index-da618cdf.js:746:43542)
    at Fg.stopRecording (https://app.explorate.co/assets/index-da618cdf.js:748:6110)
    at Fg.stop (https://app.explorate.co/assets/index-da618cdf.js:748:6417)
    at Fg._runFlush (https://app.explorate.co/assets/index-da618cdf.js:748:13600)

I should note that I have not yet captured a stack trace from an actual crash; we haven't had one in the past few days while the extension was running and logging data. The events we've been capturing so far - which, again, show up to ~100 workers being created, which doesn't seem like enough to cause a crash even on the Chromebooks - are happening relatively frequently, though.

trogau commented 1 week ago

We captured a stack trace from a freeze this morning, and it seems to confirm that mass creation of workers is what causes the problem. Attached is a log snippet showing about 1008 workers created in ~3 seconds, which froze the browser tab. Not sure how helpful it is, but I thought I'd include it for reference.

log.txt

chargome commented 1 week ago

@trogau thanks for the insights! Could you also specify which tasks you are running on the canvas? Is it a continuous animation or a static canvas? This might help with reproducing the issue.

trogau commented 1 week ago

@chargome: I'm double-checking with our team, but AFAIK the pages where we're seeing this happen don't have any canvas elements at all. We do have /some/ pages with canvas (a Mapbox map component), but it isn't loaded on the page where we're seeing the majority of these issues.

We do have Sentry.replayCanvasIntegration() being set in our Sentry.init() though.

trogau commented 1 week ago

FYI we've upgraded to v8.31.0 and are still seeing large numbers of workers created (we just had one instance of 730 created in a few seconds - not enough to crash the tab, so the user didn't notice, but we see it in the logging). The magic number seems to be about 1000 workers to freeze the tab on these devices.

billyvg commented 8 hours ago

@trogau Thanks for your help, I believe I've identified the issue here: https://github.com/getsentry/sentry-javascript/issues/13855 -- could you try downgrading to 8.25.0 to see whether that version is affected?