Closed: henke37 closed this issue 8 years ago.
You can change the limits in your kernel, or use the graceful-fs module instead of the built-in fs module; a minimal sketch of the drop-in use is below.
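For reference, graceful-fs is a drop-in replacement for fs that queues open() calls when the process runs out of descriptors instead of failing with EMFILE. A minimal sketch, assuming graceful-fs has been added as a dependency (the template path is just the one from the error log):

```js
// Sketch: graceful-fs exposes the same API as core fs, but under descriptor
// pressure the operation is queued and retried once another file closes,
// rather than throwing EMFILE.
var fs = require('graceful-fs');

fs.readFile('/home/cytube/cytube/templates/channel.jade', 'utf8', function (err, contents) {
    if (err) {
        console.error('Template read failed:', err);
        return;
    }
    console.log('Read %d characters', contents.length);
});

// Newer versions of graceful-fs can also patch the core fs module in place:
// require('graceful-fs').gracefulify(require('fs'));
```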
http://stackoverflow.com/questions/8965606/node-and-error-emfile-too-many-open-files

On 27 Apr 2016 6:59 AM, "henke37" notifications@github.com wrote:
Server Problem
Please confirm whether you've tried the following debugging steps:
- Run npm run build-server to regenerate lib/ from src/
- Run rm -rf node_modules && npm install to get a fresh install of dependencies
- Restarted the server
Description of the Problem
- What triggers the problem? The server putting a limit on the number of files that can be opened at the same time combined with high load.
- What happens? CyTube queues up async file operations beyond the open file limit, and they start failing with EMFILE errors (see below).
- What do you expect to happen instead?
System Information
- Operating System: CentOS 5
- Node Version: v0.10.40
- CyTube Version: v3.14.2
- Error Messages Displayed:

```
[Sat Mar 05 2016 20:47:28] Error: EMFILE, too many open files '/home/cytube/cytube/templates/channel.jade'
    at Object.fs.openSync (fs.js:439:18)
    at Object.fs.readFileSync (fs.js:290:15)
    at sendJade (/home/cytube/cytube/lib/web/jade.js:44:34)
    at /home/cytube/cytube/lib/web/routes/channel.js:38:28
    at Layer.handle [as handle_request] (/home/cytube/cytube/node_modules/express/lib/router/layer.js:95:5)
    at next (/home/cytube/cytube/node_modules/express/lib/router/route.js:131:13)
    at Route.dispatch (/home/cytube/cytube/node_modules/express/lib/router/route.js:112:3)
    at Layer.handle [as handle_request] (/home/cytube/cytube/node_modules/express/lib/router/layer.js:95:5)
    at /home/cytube/cytube/node_modules/express/lib/router/index.js:277:22
    at param (/home/cytube/cytube/node_modules/express/lib/router/index.js:349:14)
    at param (/home/cytube/cytube/node_modules/express/lib/router/index.js:365:14)
    at Function.process_params (/home/cytube/cytube/node_modules/express/lib/router/index.js:410:3)
    at next (/home/cytube/cytube/node_modules/express/lib/router/index.js:271:10)
    at compression (/home/cytube/cytube/node_modules/compression/index.js:205:5)
    at Layer.handle [as handle_request] (/home/cytube/cytube/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/home/cytube/cytube/node_modules/express/lib/router/index.js:312:13)

[Wed Apr 06 2016 08:21:36] Error: EMFILE, too many open files
    at [object Object].<anonymous> (/home/cytube/cytube/src/counters.js:39:42)
    at [object Object].wrapper [as _onTimeout]
    at Timer.listOnTimeout [as ontimeout]
```
A solution would be to queue I/O operations once they start approaching the limit.
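One rough sketch of what such a queue could look like: a simple in-process counter that caps concurrent reads and defers the rest (the limit of 512 is an arbitrary example, not a CyTube setting):

```js
// Illustrative only: cap the number of concurrent fs.readFile calls and
// queue the overflow until a slot frees up, instead of exhausting fds.
var fs = require('fs');

var MAX_CONCURRENT_READS = 512; // arbitrary example value
var active = 0;
var pending = [];

function queuedReadFile(path, encoding, callback) {
    if (active >= MAX_CONCURRENT_READS) {
        // At the soft limit: defer instead of opening another descriptor.
        pending.push([path, encoding, callback]);
        return;
    }

    active++;
    fs.readFile(path, encoding, function (err, data) {
        active--;
        // Kick off the next queued read, if any.
        var next = pending.shift();
        if (next) {
            queuedReadFile(next[0], next[1], next[2]);
        }
        callback(err, data);
    });
}
```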
Your stack trace points to the jade cache; can you confirm that this only happens right after a restart, before the cache is primed? If it can happen at any time, check whether you're starting the server with `DEBUG=1` or `DEBUG=true`, or setting `debug: true` in your config.yaml. Any of these options bypasses the cache and reloads the jade template from disk on every request, which is horribly inefficient and only intended for live reloading during development/debugging.
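For context, the behavior being described is roughly the following caching pattern (a simplified sketch, not the actual jade.js/sendJade implementation; the template path and debug detection shown here are assumptions):

```js
// Simplified sketch of the described behavior (not the real CyTube code):
// with debug enabled, every request recompiles the template from disk,
// opening a file descriptor per request; otherwise the compiled template
// is cached after the first read.
var jade = require('jade');
var path = require('path');

var cache = {};
var DEBUG = process.env.DEBUG === '1' || process.env.DEBUG === 'true';

function renderTemplate(name, locals) {
    var file = path.join(__dirname, 'templates', name + '.jade');

    if (DEBUG || !cache[name]) {
        // Debug mode (or a cold cache) hits the disk; under load this is
        // where EMFILE appears if the descriptor limit is too low.
        cache[name] = jade.compileFile(file);
    }

    return cache[name](locals);
}
```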
The most reasonable thing to do here, I think, would be to catch this exception (to keep it from becoming an uncaught exception, as it does now) and drop the request. Queueing is just a band-aid for the real problem of "you ran out of file descriptors".
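A rough sketch of that catch-and-drop idea, assuming an Express route shaped like the one in the stack trace (sendJade's exact signature here is an assumption):

```js
// Illustrative only: fail the single request with a 503 when the template
// read runs out of file descriptors, instead of letting the EMFILE
// exception propagate uncaught.
function handleChannelPage(req, res) {
    try {
        sendJade(res, 'channel', { /* template locals */ });
    } catch (err) {
        if (err.code === 'EMFILE') {
            // Out of file descriptors: drop this request rather than crash.
            res.status(503).send('Server is under heavy load, please try again later');
            return;
        }
        throw err;
    }
}
```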
The real solution if you're getting `EMFILE` errors is to increase the file descriptor limit for the user running the daemon. Most of the file descriptors held open by the process are socket.io websockets and polling requests anyway, so queueing file I/O isn't really going to solve much in the context of CyTube. You can increase the fd limit by changing sysctl.conf.
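For reference, on CentOS the per-user limit is typically raised in /etc/security/limits.conf, with fs.file-max in sysctl.conf controlling the system-wide ceiling. The values and the cytube user name below are only illustrative (the user name is assumed from the paths in the error log):

```
# /etc/security/limits.conf -- illustrative values, not CyTube defaults:
# raise the open-file limit for the account running the daemon.
cytube  soft  nofile  16384
cytube  hard  nofile  16384

# /etc/sysctl.conf -- system-wide ceiling, if it also needs raising:
# fs.file-max = 100000
```

The new per-user limit only applies to sessions started after the change, so the daemon needs to be restarted under it.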