goldbergyoni / nodebestpractices

:white_check_mark: The Node.js best practices list (July 2024)
https://twitter.com/nodepractices/
Creative Commons Attribution Share Alike 4.0 International
99.56k stars 10.11k forks

Performance & diagnostics best practices - call for ideas #256

Closed goldbergyoni closed 5 years ago

goldbergyoni commented 6 years ago

We're kicking off the performance best practices section. Any ideas regarding performance best practices, or great bibliography to extract ideas from (YouTube, blog posts, etc.)?

This issue will summarize all the ideas and bibliography and serve as a TOC for the writers. It is also a call for anyone who wants to write, or who would like to learn by practicing, discussing and writing.

@sagirk @BrunoScheufler @js-kyle

goldbergyoni commented 6 years ago

Title: Monitor customer-facing metrics first

Gist: Going by Google's SRE book, monitoring should focus on metrics that immediately impact the customer - practically the golden signals: API latency, traffic, errors, and saturation. Also relevant are the RED framework and the USE method.

Anything else you monitor (e.g. event loop) is a cause and not a symptom. Quote from My Philosophy on Alerting:

"But," you might say, "I know database servers that are unreachable results in user data unavailability." That's fine. Alert on the data unavailability. Alert on the symptom: the 500, the Oops!, the whitebox metric that indicates that not all servers were reached from the database's client. Why?

Examples: we will show how to achieve this in Node.js
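A dependency-free sketch of what such symptom-level metrics could look like in Node.js (in a real setup you would likely export these to a monitoring system such as Prometheus rather than keep them in process memory):

```javascript
// Minimal sketch: track the "golden signals" (latency, traffic, errors)
// per request. In production you would export these to a monitoring
// system instead of keeping them in memory.
const metrics = { requests: 0, errors: 0, latenciesMs: [] };

function recordRequest(latencyMs, statusCode) {
  metrics.requests += 1;                      // traffic
  metrics.latenciesMs.push(latencyMs);        // latency
  if (statusCode >= 500) metrics.errors += 1; // errors (the symptom)
}

function percentile(sortedValues, p) {
  const idx = Math.min(sortedValues.length - 1,
    Math.floor((p / 100) * sortedValues.length));
  return sortedValues[idx];
}

function snapshot() {
  const sorted = [...metrics.latenciesMs].sort((a, b) => a - b);
  return {
    requests: metrics.requests,
    errorRate: metrics.requests ? metrics.errors / metrics.requests : 0,
    p95LatencyMs: sorted.length ? percentile(sorted, 95) : null,
  };
}
```

Alerting on `errorRate` and `p95LatencyMs` targets the symptom the customer feels, rather than internal causes like event loop lag.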

profnandaa commented 6 years ago

Perhaps debugging can be part of it? This is a good place - https://www.joyent.com/node-js/production/debug

js-kyle commented 6 years ago

The express documentation has good tips: https://expressjs.com/en/advanced/best-practice-performance.html

We may want to review the current production practices section, as they may overlap or be more appropriate as part of the new section, for example the bullet around serving assets from a CDN rather than Node.js

sagirk commented 6 years ago

Choose the classical for loop over forEach/ES6 for...of when dealing with huge amounts of data (e.g., tens or hundreds of millions of items).

Reasons why: https://medium.com/tech-tajawal/loops-performances-in-node-js-9fbccf2d6aa6

https://stackoverflow.com/questions/500504/why-is-using-for-in-with-array-iteration-a-bad-idea

https://stackoverflow.com/questions/43031988/javascript-efficiency-for-vs-foreach/43032526#43032526

https://stackoverflow.com/questions/3520688/javascript-loop-performance-why-is-to-decrement-the-iterator-toward-0-faster-t
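A rough micro-benchmark sketch of the comparison (results vary across V8 versions, so measure on your own workload; the timings printed are illustrative, not guaranteed):

```javascript
// Micro-benchmark sketch: classical for loop vs Array#forEach over a
// large array. Run several times and on your own data before drawing
// conclusions - JIT warm-up and V8 version both matter.
const data = new Array(1_000_000).fill(1);

function sumFor(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

function sumForEach(arr) {
  let total = 0;
  arr.forEach((v) => { total += v; });
  return total;
}

function time(label, fn) {
  const start = process.hrtime.bigint();
  const result = fn(data);
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(2)} ms`);
  return result;
}

time('for', sumFor);
time('forEach', sumForEach);
```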

sagirk commented 6 years ago

Use a version of Node that ships with the new TurboFan JIT compiler rather than just the older Crankshaft compiler.

Reasons why: https://medium.com/the-node-js-collection/get-ready-a-new-v8-is-coming-node-js-performance-is-changing-46a63d6da4de

AbdelrahmanHafez commented 6 years ago

Deal with databases and external APIs in batches, meaning that a developer should favor fetching 100 entities with a single HTTP request over making 100 HTTP requests that carry a single document each.

The same goes for database operations: writing and fetching data are faster when done in a batch rather than as multiple single operations.
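A sketch of the idea, assuming hypothetical `fetchUserById`/`fetchUsersByIds` functions (not a real API):

```javascript
// Sketch: N round trips of one entity each vs one round trip per batch.
// fetchUserById / fetchUsersByIds are hypothetical stand-ins for any
// HTTP client or DB driver call.
async function getUsersOneByOne(ids, fetchUserById) {
  const users = [];
  for (const id of ids) {
    users.push(await fetchUserById(id)); // one round trip per id
  }
  return users;
}

async function getUsersBatched(ids, fetchUsersByIds, batchSize = 100) {
  const users = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    const batch = ids.slice(i, i + batchSize);
    users.push(...await fetchUsersByIds(batch)); // one round trip per 100 ids
  }
  return users;
}
```

For 250 ids the first version makes 250 calls, the second only 3; most of the saving is network/connection overhead, not serialization.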

goldbergyoni commented 6 years ago

Copy-pasting from the Reddit discussion:

"I usually write to log/monitor on DB query start and query end so I can later identify avg query time and identify the slowest queries. Using Sequelize you can pass the flag {benchmark:true} and get the query times logged. Later you can bake/export to your monitoring system and create metrics based on it"
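Beyond Sequelize's `benchmark: true` flag, the same idea can be sketched library-agnostically by wrapping any query function (the names below are illustrative):

```javascript
// Library-agnostic sketch: wrap a query function, record its duration,
// and keep per-query stats so the slowest queries surface later.
const stats = new Map(); // queryName -> { count, totalMs, maxMs }

async function timedQuery(name, runQuery) {
  const start = process.hrtime.bigint();
  try {
    return await runQuery();
  } finally {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    const s = stats.get(name) || { count: 0, totalMs: 0, maxMs: 0 };
    s.count += 1;
    s.totalMs += ms;
    s.maxMs = Math.max(s.maxMs, ms);
    stats.set(name, s);
  }
}

function slowestQueries() {
  return [...stats.entries()]
    .map(([name, s]) => ({ name, avgMs: s.totalMs / s.count, maxMs: s.maxMs }))
    .sort((a, b) => b.avgMs - a.avgMs);
}
```

The `stats` map can then be exported to a monitoring system on an interval, exactly as the quote suggests.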

goldbergyoni commented 6 years ago

Use factories or constructors to ensure objects are created with the same schema, so that V8 won't have to generate new hidden classes at runtime (a.k.a. polymorphic vs. monomorphic functions)

https://medium.com/the-node-js-collection/get-ready-a-new-v8-is-coming-node-js-performance-is-changing-46a63d6da4de
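A minimal sketch of the shape/hidden-class idea:

```javascript
// Sketch: create objects through one factory so they share a single
// hidden class (shape), keeping functions that receive them monomorphic.

// Problematic: ad-hoc literals with differing property sets/order create
// multiple hidden classes for what is logically one type.
const a = { x: 1, y: 2 };
const b = { y: 2, x: 1, z: 3 }; // different shape than `a`

// Better: one factory, one shape, properties always in the same order.
function createPoint(x, y) {
  return { x, y };
}
const p1 = createPoint(1, 2);
const p2 = createPoint(3, 4); // same hidden class as p1
```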

VinayaSathyanarayana commented 6 years ago
goldbergyoni commented 6 years ago

When evaluating alternatives and measuring performance, use benchmarking tools like autocannon and Benchmark.js, which provide more precise results (microsecond timing, many iterations) and better benchmarking throughput (e.g., issuing more calls per second)

TheHollidayInn commented 6 years ago

Analyzing and monitoring CPU bottlenecks or memory leaks with Node Clinic/Prometheus

TheHollidayInn commented 5 years ago

A few others:

References:

barretojs commented 5 years ago

I think a good performance practice is to use a load balancer for requests, and another to distribute the Node.js app across the multiple cores of your CPU (because, as we know, Node is single-threaded). The latter can be achieved extremely easily using pm2, by setting one flag that does the multi-core load balancing automatically. Besides, pm2 makes it really easy to monitor the memory and CPU usage of your app, and it has a lot of other great features.

http://pm2.keymetrics.io/

https://medium.com/iquii/good-practices-for-high-performance-and-scalable-node-js-applications-part-1-3-bb06b6204197

goldbergyoni commented 5 years ago

@TheHollidayInn Great list!

A few remarks/questions:

  1. Use gzip compression & serve from a CDN - those are more frontend-related tips; are we covering frontend as well? I haven't dealt with such questions before and tend to think not. What do @sagirk, @BrunoScheufler and @js-kyle think?
  2. HTTP/2 - do we have evidence/a reference that HTTP/2 reduces the load on the server?
goldbergyoni commented 5 years ago

@barretojs First, welcome on board, good to have you here. I'm not sure about this advice, as PM2 uses the cluster module under the hood (I think so, at least - does it?), which seems to be slower than a *real* router like nginx or iptables

https://medium.com/@fermads/node-js-process-load-balancing-comparing-cluster-iptables-and-nginx-6746aaf38272

What do you think?

TheHollidayInn commented 5 years ago

@i0natan Np!

  1. For gzip and CDN, I included these because they are decisions you'd make while building out web apps (like using Express). If you serve static content with Express, using gzip will be helpful. Also, it might be good to just note the benefits of using a CDN rather than serving your own static content.
  2. I'm not sure load is improved, although multiplexing may help. Here is a list of benefits: https://webapplog.com/http2-node. To me, it seems more helpful for the front end, but I'll admit I'm still new to HTTP/2.
barretojs commented 5 years ago

@i0natan you're right. I wasn't aware of iptables' capability, and the numbers are impressive. I think there's no doubt that for pure performance it distributes the load far better. Thanks for the great in-depth article!

goldbergyoni commented 5 years ago

/remind me in 2 days

reminders[bot] commented 5 years ago

@i0natan set a reminder for Oct 9th 2018

js-kyle commented 5 years ago

I'd definitely +1 using the CDN for static assets - we have this as a production best practice at the moment. I think the performance section and production sections are going to have a lot of potential crossover so it's a good time to clear up the differences

Again, with gzip & other CPU-intensive tasks, we actually recommend that they are better handled outside of Node for performance reasons. So I guess the question is whether we want to adjust our view on this or, as part of this section, recommend not using Node for them. Thoughts? See current bullets 5.3 & 5.11

reminders[bot] commented 5 years ago

:wave: @i0natan,

goldbergyoni commented 5 years ago

Adding one more which exists in our production best practices: set NODE_ENV=production when rendering on the server, as it utilizes the templating engine cache
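A dependency-free sketch of the underlying pattern (Express enables its view cache when NODE_ENV is production; the tiny "compiler" below is a stand-in, not a real templating engine):

```javascript
// Sketch: compile templates once and reuse the compiled function only
// when NODE_ENV=production - the same pattern Express's view cache
// applies automatically. compileTemplate is a toy stand-in.
const compiledCache = new Map();

function compileTemplate(source) {
  // stand-in for a real template compiler
  return (data) => source.replace(/\{\{(\w+)\}\}/g, (_, key) => data[key]);
}

function render(source, data) {
  const useCache = process.env.NODE_ENV === 'production';
  if (useCache && compiledCache.has(source)) {
    return compiledCache.get(source)(data); // skip recompilation
  }
  const compiled = compileTemplate(source);
  if (useCache) compiledCache.set(source, compiled);
  return compiled(data);
}
```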

goldbergyoni commented 5 years ago

Also, I highly recommend including this video in our bibliography: https://www.youtube.com/watch?v=K8spO4hHMhg&vl=en

VinayaSathyanarayana commented 5 years ago

We should add a note on not blocking the Event loop (https://nodejs.org/en/docs/guides/dont-block-the-event-loop/)
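One common technique from that guide, sketched here, is partitioning: process a large array in slices and yield back to the event loop between slices so other requests keep being served:

```javascript
// Sketch: instead of a long synchronous loop that blocks the event
// loop, process items in slices and yield with setImmediate between
// them, letting pending I/O and other callbacks run.
function processInChunks(items, handleItem, chunkSize = 1000) {
  return new Promise((resolve) => {
    let index = 0;
    function next() {
      const end = Math.min(index + chunkSize, items.length);
      for (; index < end; index++) handleItem(items[index]);
      if (index < items.length) setImmediate(next); // yield, then continue
      else resolve();
    }
    next();
  });
}
```

For genuinely CPU-heavy work, worker threads or a separate service are the stronger option; partitioning only spreads the cost out.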

js-kyle commented 5 years ago

@VinayaSathyanarayana Great resource!

js-kyle commented 5 years ago

This is roughly what our current suggestions are looking like in a list

* Monitor customer-facing metrics
* Use text compression
* Set NODE_ENV to production
* Avoid synchronous functions
* Run Node.js in a cluster
* Choose the classical for loop over forEach/ES6 for...of when dealing with huge amounts of data
* Use a version of Node that ships with the new TurboFan JIT compiler
* Deal with database and external APIs in batches
* Benchmark slow DB calls
* Use factories or constructors to ensure objects are created with the same schema so V8 won't have to generate hidden classes at runtime
* Analyze repeated API calls/database queries to cache them
* Debounce and throttle
* Benchmark with load testing tools
* Analyze and monitor CPU bottlenecks or memory leaks with Node Clinic/Prometheus
* Serve static assets from a CDN
* Use HTTP/2
* Don't block the event loop

goldbergyoni commented 5 years ago

We may cooperate or grab ideas from: https://github.com/nodejs/diagnostics/issues/211

TheHollidayInn commented 5 years ago

Is it okay to submit PRs for drafts of these?

js-kyle commented 5 years ago

@TheHollidayInn Yes!

We've just had some discussions about this, and have decided to change the approach for developing this section. In summary:

Do you have a bullet from the list above which you have a preference to work on first? If so, I can create the first issue based on that one so we have a good model to work on for the rest of the list.

Look forward to seeing your content, as always feel free to check in with a PR early on to check your approach if needed

TheHollidayInn commented 5 years ago

Great! I had some copy written up as a quick draft for using compression. I went ahead and opened a PR here: https://github.com/i0natan/nodebestpractices/pull/288. I'm happy to change anything to fit your new flow.

Berkmann18 commented 5 years ago

Not sure if it was mentioned before, but here are other (perhaps not Node.js-centric) things that I think should be added:

mohemos commented 5 years ago

For MySQL Users:

goldbergyoni commented 5 years ago

@Berkmann18 Great set of advice. And welcome aboard.

Why does removing orphaned dependencies (tree shaking) improve performance? And what is "memoising"?

@mohemos Welcome & Thank you. Adding some of the advice to the pool #302

Berkmann18 commented 5 years ago

@i0natan Thank you. Tree shaking improves performance because it includes only code that is actually used, instead of importing/requiring every exported function/data structure from modules/libraries/frameworks, which makes the resulting code lighter and easier to transfer and parse. Here's another explanation of what tree shaking is.

Example: Importing everything but using only add;

//NodeJS
const maths = require('maths');

//ES
import * as maths from 'maths';

Only importing add.

//NodeJS
const { add } = require('maths');
//Or
const add = require('maths').add;

//ES
import { add } from 'maths';
import add from 'maths/add';

Memoising is where code does something once and then, instead of doing it again (wasting resources to produce the exact same result), simply reuses what was computed and stored in a variable.

Example: Without memoising:

const fx = (x, y) => {
  console.log('The result is', doALongCalculation(x, y)); // first call
  return doALongCalculation(x, y); // same expensive work done twice
};

With memoising:

const fx = (x, y) => {
  const result = doALongCalculation(x, y); // computed once
  console.log('The result is', result);
  return result; // reused
};

So it's basically where the code memorises some data in a variable instead of re-doing the computation/fetching process again and again.
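The snippet above avoids a duplicate call within one invocation; memoisation usually also caches across calls. A generic sketch (with `doALongCalculation` as a stand-in for any pure, expensive function):

```javascript
// Generic memoisation sketch: cache results by argument key so repeated
// calls with the same inputs skip the expensive work entirely. Only
// safe for pure functions with serialisable arguments.
function memoise(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

const doALongCalculation = (x, y) => x * y; // pretend this is expensive
const memoised = memoise(doALongCalculation);
memoised(3, 4); // computed
memoised(3, 4); // served from the cache
```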

goldbergyoni commented 5 years ago

@Berkmann18 Thanks for the comprehensive and great explanation.

Memoising sounds like a good fit for the General bullet - a bullet which holds all generic practices that are not Node.js-specific

About tree shaking - I'm familiar with it; for frontend the benefit is obvious, but why would it improve backend performance? At run time all the files are parsed anyway (this is how 'require' works) - do you mean to also re-bundle the code and pack it into fewer files? I'm looking for the exact part that boosts performance

Berkmann18 commented 5 years ago

@i0natan You're welcome :smile: . Tree shaking prevents the server from loading things it doesn't need, which lets it get to the main code faster and avoids going through more resources than it needs to run. It might not make much of a difference for static dependencies (aside from the size of the backend codebase), but for dynamic ones it does (at least a tiny bit). When it comes to bundling, it allows the bundling tool to include less, which encourages a smaller, more optimised bundle.

VinayaSathyanarayana commented 5 years ago

We should add throttling to the list

VinayaSathyanarayana commented 5 years ago

We should add "Avoid functions in loops" - writing functions within loops tends to result in errors because of the way the function creates a closure around the loop variable
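A minimal sketch of the pitfall and its fix (a correctness concern at least as much as a performance one):

```javascript
// Classic pitfall: with var, every callback created in the loop closes
// over the same variable, so all of them see its final value.
function scheduleWithVar() {
  const fns = [];
  for (var i = 0; i < 3; i++) {
    fns.push(() => i); // all three closures share one `i`
  }
  return fns.map((fn) => fn()); // [3, 3, 3]
}

// Fix: let gives each iteration its own binding (or hoist the function
// out of the loop entirely, which also avoids re-allocating it).
function scheduleWithLet() {
  const fns = [];
  for (let i = 0; i < 3; i++) {
    fns.push(() => i);
  }
  return fns.map((fn) => fn()); // [0, 1, 2]
}
```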

goldbergyoni commented 5 years ago

@Berkmann18

It might not make much of a difference for static dependencies (aside from the size of the backend codebase), but for dynamic ones it does (at least a tiny bit).

Can you provide an example, maybe even using code, and some data/benchmark that shows the impact? I just want to ensure we address popular issues that leave a serious performance footprint

goldbergyoni commented 5 years ago

@VinayaSathyanarayana Thanks for joining the brainstorm :]

"Avoid functions in loops" - writing functions within loops tends to result in errors because of the way the function creates a closure around the loop variable

Can you provide a (pseudo)code example? Are we concerned about performance, or about code readability/errors?

Berkmann18 commented 5 years ago

@i0natan Sure, I've made a repo with benchmarks using only math as an example.

TheHollidayInn commented 5 years ago

@AbdelrahmanHafez For your proposed idea Deal with database and external APIs in batches, do you have any links or references for this?

migramcastillo commented 5 years ago
* Promisify your DB connections and run queries in parallel via async/await and Promise.all()

Not only for DB purposes: using Promise.all instead of a series of awaits should be a performance best practice. E.g., if we need to make 4 API calls with Axios to build our complete response, instead of awaiting each one it is better to use a single await with Promise.all().
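A sketch of the difference, with `fetchA`..`fetchD` standing in for independent API calls:

```javascript
// Sequential: each await waits for the previous call to finish, so
// total time is roughly the sum of the four latencies.
async function buildResponseSequential(fetchA, fetchB, fetchC, fetchD) {
  const a = await fetchA();
  const b = await fetchB();
  const c = await fetchC();
  const d = await fetchD();
  return { a, b, c, d };
}

// Parallel: all four calls start immediately; total time is roughly
// the latency of the slowest call. Only valid when the calls are
// independent of each other.
async function buildResponseParallel(fetchA, fetchB, fetchC, fetchD) {
  const [a, b, c, d] = await Promise.all([fetchA(), fetchB(), fetchC(), fetchD()]);
  return { a, b, c, d };
}
```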

goldbergyoni commented 5 years ago

@migramcastillo Makes a lot of sense. Do you want to write that bullet?

goldbergyoni commented 5 years ago

@js-kyle Maybe add this idea to our TOC issue?

migramcastillo commented 5 years ago

@migramcastillo Makes a lot of sense. Do you want to write that bullet?

I'd be glad to, in which branch should I make the PR?

sagirk commented 5 years ago

I'd be glad to, in which branch should I make the PR?

@migramcastillo If you meant to ask what branch to raise your PR against, that would be master. If you meant to ask what branch to make your changes in, feel free to create a new branch in your fork.

stale[bot] commented 5 years ago

Hello there! 👋 This issue has gone silent. Eerily silent. ⏳ We currently close issues after 100 days of inactivity. It has been 90 days since the last update here. If needed, you can keep it open by replying here. Thanks for being a part of the Node.js Best Practices community! 💚

abhishekshetty commented 4 years ago
* Promisify your DB connections and run queries in parallel via async/await and Promise.all()

Not only for DB purposes: using Promise.all instead of a series of awaits should be a performance best practice. E.g., if we need to make 4 API calls with Axios to build our complete response, instead of awaiting each one it is better to use a single await with Promise.all().

Although Promise.all is better than sequential non-dependent promises in most cases, it works best only when the number of calls/promises is known beforehand, or is at least small. With a dynamic number of promises, e.g., batching at runtime, a significantly growing number of calls might end up exhausting the connection pool, creating deadlocks, or, in the case of HTTP calls, spamming the downstream service. Sample Stack Overflow link: https://stackoverflow.com/questions/53964228/how-do-i-perform-a-large-batch-of-promises
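One common mitigation, sketched here, is to run the tasks with a bounded concurrency limit (libraries such as p-limit offer this; below is a hand-rolled illustration):

```javascript
// Sketch: run a large, dynamic set of async tasks with at most `limit`
// in flight at once, instead of launching them all in one Promise.all,
// so the connection pool / downstream service is not flooded.
async function runWithConcurrencyLimit(tasks, limit) {
  const results = [];
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const index = next++; // claim the next task (single-threaded, so safe)
      results[index] = await tasks[index]();
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) }, () => worker());
  await Promise.all(workers);
  return results;
}
```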

abhishekshetty commented 4 years ago
  1. Maybe have a section on in-memory caching and best practices around implementing and maintaining a cache. There are a few packages that support this functionality. Also, it seems no shared in-memory cache implementation is possible in cluster mode.

  2. I'd like to highlight a recent video released by the Chrome team, https://www.youtube.com/watch?v=ff4fgQxPaO0, which talks about how to make apps faster when dealing with large object literals.
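A minimal in-memory TTL cache sketch for the first point (per-process only; in cluster mode each worker holds its own copy, so a shared cache needs an external store such as Redis):

```javascript
// Minimal in-memory cache sketch with time-to-live expiry. Entries are
// expired lazily on read; a production cache would also bound its size
// (e.g. LRU eviction) to avoid unbounded memory growth.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map(); // key -> { value, expiresAt }
  }
  set(key, value) {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazy expiry on read
      return undefined;
    }
    return entry.value;
  }
}
```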