AtomLinter / linter-eslint-node

ESLint plugin for Atom/Pulsar Linter (v8 and above)
https://web.pulsar-edit.dev/packages/linter-eslint-node
MIT License

Bug: Node process sometimes gets stuck at 100% CPU if project closed whilst Atom is opening #16

Closed: Standard8 closed this issue 1 year ago

Standard8 commented 2 years ago

Issue Type

Bug

Issue Description

STR

  1. Configure Atom so that it has two projects/windows loaded. One of these should be from a large repository, e.g. gecko-dev.
  2. Shut down Atom.
  3. Start Atom; as it starts, close the window for the large repository (there's probably some leeway in the timing, but it took me a couple of attempts to reproduce).

Results: The Node process gets stuck at 100% CPU and never shuts down.

Bug Checklist

Linter Eslint Node: Debug output:
Atom version: 1.60.0
linter-eslint-node version: 1.0.3
Worker using Node at path: /usr/local/bin/node
Worker Node version: v17.2.0
Worker PID: 2958
ESLint version: 8.9.0
ESLint location: /Users/mark/dev/thunderbird-conversations/node_modules/eslint/
Linting in this project performed by: linter-eslint-node
Hours since last Atom restart: 0.1
Platform: darwin
Current file's scopes: [
  "source.js"
]
linter-eslint-node configuration: {
  "disabling": {
    "disableWhenNoEslintConfig": true,
    "rulesToSilenceWhileTyping": []
  },
  "scopes": [
    "source.js",
    "source.jsx",
    "source.js.jsx",
    "source.flow",
    "source.babel",
    "source.js-semantic",
    "source.ts"
  ],
  "nodeBin": "node",
  "warnAboutOldEslint": true,
  "autofix": {
    "fixOnSave": false,
    "rulesToDisableWhileFixing": [],
    "ignoreFixableRulesWhileTyping": false
  },
  "advanced": {
    "disableEslintIgnore": false,
    "showRuleIdInMessage": true,
    "useCache": true
  }
}
UziTech commented 2 years ago

I tried this on Windows and Node v16 and am not able to reproduce it. What makes you think it is an issue with this package?

Standard8 commented 2 years ago

> I tried this on Windows and Node v16 and am not able to reproduce it. What makes you think it is an issue with this package?

Sorry, I forgot to mention: the process that is spinning the CPU is node (linter-eslint-node worker nnnn).

This has also only started happening since I installed this package.

savetheclocktower commented 2 years ago

I've seen this happen on macOS. I'll keep this one open until I can repro myself. Thanks!

nelson6e65 commented 2 years ago

My editor just freezes when I open my .eslintrc.json. The CLI works fine.

Michal27 commented 2 years ago

I have a similar issue with this package. I have 5 repositories permanently open in Atom, and when Atom starts with the linter-eslint-node package installed, it starts to melt my CPU. I have a powerful setup with an Intel Core i7 (6 cores, 12 threads), and even Chrome becomes laggy while this package is running.

It happens every time I start Atom with this package. When I disable linter-eslint-node and restart Atom, everything is back to normal.

Here is my output of Linter Eslint Node: Debug

Atom version: 1.60.0
linter-eslint-node version: 1.0.5
Worker using Node at path: /usr/bin/node
Worker Node version: v14.19.3
Worker PID: 63706
ESLint version: 8.13.0
ESLint location: /home/michal/prg/volby-widget/node_modules/eslint/
Linting in this project performed by: linter-eslint-node
Hours since last Atom restart: 0.1
Platform: linux
Current file's scopes: [
  "source.js.jsx",
  "meta.function.js",
  "meta.brace.curly.js"
]
linter-eslint-node configuration: {
  "advanced": {
    "showRuleIdInMessage": false,
    "useCache": false,
    "disableEslintIgnore": false
  },
  "disabling": {
    "disableWhenNoEslintConfig": false,
    "rulesToSilenceWhileTyping": []
  },
  "warnAboutOldEslint": false,
  "scopes": [
    "source.js",
    "source.jsx",
    "source.js.jsx",
    "source.flow",
    "source.babel",
    "source.js-semantic",
    "source.ts"
  ],
  "nodeBin": "node",
  "autofix": {
    "fixOnSave": false,
    "rulesToDisableWhileFixing": [],
    "ignoreFixableRulesWhileTyping": false
  }
}
savetheclocktower commented 1 year ago

Exactly one year after this issue was filed, I've finally caught a worker process using 100% CPU on my own machine. I attached a debugger and I think I've figured out what's happening:

This section in worker.js is not the wisest thing I've ever written. Its purpose was to prevent anything that wasn't JSON from being written to stderr, since newline-delimited JSON is how the worker communicates with the package; even exceptions need to be represented as JSON.
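
Roughly, the pattern looks something like the sketch below (the payload shape here is illustrative, not the exact code in worker.js):

```js
// Illustrative sketch only: the worker talks to the package in
// newline-delimited JSON, so even uncaught exceptions get serialized
// to a JSON line before they reach stderr.
function emitError(error) {
  const payload = JSON.stringify({
    error: true,
    message: error.message,
    stack: error.stack
  });
  // One JSON document per line so the parent can parse stderr line by line.
  process.stderr.write(`${payload}\n`);
}

process.on('uncaughtException', (error) => {
  emitError(error);
});
```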

In my case, I don't know what the originating error was, but emitError itself was throwing an exception over and over because process.stderr couldn't be written to (the socket was already closed). That exception was itself firing uncaughtException and… well, causing an infinite loop, but the dumb async kind where the maximum call stack size never gets exceeded.
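
To make the failure mode concrete, here's a rough simulation of the loop as I understand it (simplified, not the package's actual code; in the real incident the write failure surfaced asynchronously rather than as a synchronous throw):

```js
// Simplified illustration of the failure mode described above.
process.on('uncaughtException', (error) => {
  // If stderr's pipe has already been closed by the parent, this write
  // fails. In the worker, that failure surfaced asynchronously, so it
  // became the *next* uncaught exception instead of overflowing the
  // call stack here, and the handler ran again, and again, and again.
  process.stderr.write(JSON.stringify({ message: error.message }) + '\n');
  // Crucially, there is no process.exit() here, so nothing ever breaks
  // the cycle and the worker just spins the CPU.
});

throw new Error('some original failure after stderr is gone');
```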

I now understand that uncaughtException is fine to use in the way I intended as long as I exit the worker process after catching the exception. My goal was never to swallow all errors and try to make the worker soldier on — just to make the final error message understandable to JobManager. So I'll experiment with a couple of different ways of fixing this. In general, I think JobManager should be better at recognizing when workers fail and discerning whether those failures are worth reporting to the user — i.e., is the same thing happening over and over, or did a worker fail as a fluke one-off thing?
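
The direction I'm leaning is something like this (a sketch of the intent, not the final patch):

```js
// Sketch of the fix: report the error once if we still can, then exit.
process.on('uncaughtException', (error) => {
  try {
    process.stderr.write(JSON.stringify({
      error: true,
      message: error.message,
      stack: error.stack
    }) + '\n');
  } catch (_err) {
    // stderr may already be unusable; nothing useful left to do with it.
  }
  // Exiting breaks the cycle. It also gives JobManager a clean signal
  // (the worker died) that it can use to decide whether to respawn the
  // worker and whether the failure is worth surfacing to the user.
  process.exit(1);
});
```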

Anyway, if I go a week or two without another occurrence, I'll assume that my fix has worked and push out an update.