Closed: goldbergyoni closed this issue 5 years ago
Title: Monitor first customer-facing metrics Gist: Going by Google's SRE book, monitoring should focus on metrics that immediately impact the customer, practically the golden signals: API Latency, Traffic, Errors, and Saturation. Also relevant are the RED framework and the USE method
Anything else you monitor (e.g. the event loop) is a cause, not a symptom. Quote from "My Philosophy on Alerting":
"But," you might say, "I know database servers that are unreachable results in user data unavailability." That's fine. Alert on the data unavailability. Alert on the symptom: the 500, the Oops!, the whitebox metric that indicates that not all servers were reached from the database's client. Why?
Examples: we will show how to achieve this in Node.js
Perhaps debugging can be part of it? This is a good place - https://www.joyent.com/node-js/production/debug
The express documentation has good tips: https://expressjs.com/en/advanced/best-practice-performance.html
We may want to review the current production practices section, as some bullets may overlap with or fit better in the new section; for example, the bullet about serving assets from a CDN rather than from Node.js
Choose the classical `for` loop over `forEach`/ES6 `for...of` when dealing with huge amounts of data (e.g., 10/100 million+ items).
Reasons why: https://medium.com/tech-tajawal/loops-performances-in-node-js-9fbccf2d6aa6
https://stackoverflow.com/questions/500504/why-is-using-for-in-with-array-iteration-a-bad-idea
https://stackoverflow.com/questions/43031988/javascript-efficiency-for-vs-foreach/43032526#43032526
Use a version of Node that ships with the new TurboFan JIT compiler rather than just the older Crankshaft compiler.
Reasons why: https://medium.com/the-node-js-collection/get-ready-a-new-v8-is-coming-node-js-performance-is-changing-46a63d6da4de
Deal with databases and external APIs in batches, meaning a developer should favor fetching 100 entities with a single HTTP request over issuing 100 HTTP requests with a single document each.
The same goes for database operations: writing and fetching data are faster when done in one batch rather than in many individual operations.
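A sketch of the batching idea above. The fetcher names (`fetchUserById`, `fetchUsersByIds`) are hypothetical placeholders; in SQL the batched variant would map to a `WHERE id IN (...)` query, and over HTTP to a single bulk endpoint:

```javascript
// Anti-pattern: one round trip per entity.
async function getUsersOneByOne(ids, fetchUserById) {
  const users = [];
  for (const id of ids) {
    users.push(await fetchUserById(id)); // N sequential round trips
  }
  return users;
}

// Preferred: a single round trip for the whole batch.
async function getUsersBatched(ids, fetchUsersByIds) {
  return fetchUsersByIds(ids); // one round trip
}
```

The win comes from amortizing per-request overhead (network latency, connection setup, query planning) over many entities.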
Copy-pasting from the Reddit discussion:
"I usually write to log/monitor on DB query start and query end so I can later identify avg query time and identify the slowest queries. Using Sequelize you can pass the flag {benchmark:true} and get the query times logged. Later you can bake/export to your monitoring system and create metrics based on it"
Use factories or constructors to ensure objects are created with the same schema, so V8 won't have to generate new hidden classes at runtime (a.k.a. polymorphic vs monomorphic functions)
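A minimal sketch of the hidden-class point above; the field names are illustrative. The idea is that objects of the same logical type should always get the same properties in the same order, so call sites that receive them stay monomorphic:

```javascript
// Consistent shape: V8 can reuse one hidden class for every Point.
class Point {
  constructor(x, y) {
    this.x = x; // always the same properties,
    this.y = y; // always in the same order
  }
}

// Anti-pattern: "same" type with diverging shapes forces extra hidden
// classes and can make hot functions polymorphic:
//   const a = { x: 1, y: 2 };
//   const b = { y: 2, x: 1, label: 'p' };

function squaredLength(p) {
  // stays monomorphic as long as every `p` is a Point
  return p.x * p.x + p.y * p.y;
}
```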
When evaluating alternatives and needing to measure performance, use benchmark tooling like autocannon and Benchmark.js, which can provide more precise results (microsecond timing, many iterations) and better benchmarking performance (e.g. issuing more calls per second)
Analyzing and monitoring cpu bottlenecks or memory leaks with Node Clinic/Prometheus
A few others:
References:
I think a good performance practice is to use a load balancer for requests, and to distribute the Node.js app across the multiple cores of your CPU (because, as we know, Node is single-threaded). The latter can be achieved extremely easily using pm2: setting one flag enables multi-core load balancing automatically. Besides, pm2 makes it really easy to monitor the memory and CPU usage of your app, and it has a lot of other great features.
@TheHollidayInn Great list!
Few remarks/questions:
@barretojs First, welcome on board, good to have you here. I'm not sure about this advice, as PM2 uses the cluster module under the hood (I think so at least, does it?), which seems to be slower than a "real" router like nginx or iptables
What do you think?
@i0natan Np!
@i0natan You're right, I wasn't aware of iptables' capability, and the numbers are impressive. I think there's no doubt that for pure performance it distributes the load way better. Thanks for the great in-depth article!
/remind me in 2 days
@i0natan set a reminder for Oct 9th 2018
I'd definitely +1 using the CDN for static assets - we have this as a production best practice at the moment. I think the performance section and production sections are going to have a lot of potential crossover so it's a good time to clear up the differences
Again, with gzip & other CPU intensive tasks, we actually recommend that they are better handled outside of Node for performance reasons. So I guess it's whether we want to adjust our view on this, or, as part of this section recommend to not use Node. Thoughts? See current bullets 5.3 & 5.11
:wave: @i0natan,
Adding one more which exists in our production best practices: set NODE_ENV=production if rendering on the server as it utilizes the templating engine cache
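A minimal sketch of the practice above. The inline script is only a stand-in so the command is runnable; in a real deployment you would start your actual entry point (e.g. `node server.js`) the same way:

```shell
# Setting NODE_ENV=production makes Express and most template engines
# enable view caching and skip development-only work.
NODE_ENV=production node -e 'console.log("NODE_ENV is", process.env.NODE_ENV)'
```

In practice this variable is usually set by the process manager or container environment rather than inline on the command.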
I'd also highly recommend including this video in our bibliography: https://www.youtube.com/watch?v=K8spO4hHMhg&vl=en
We should add a note on not blocking the Event loop (https://nodejs.org/en/docs/guides/dont-block-the-event-loop/)
@VinayaSathyanarayana Great resource!
This is roughly what our current suggestions are looking like in a list
* Monitor customer-facing metrics
* Use text compression
* Set NODE_ENV to production
* Avoid synchronous functions
* Run Node.js in a cluster
* Choose the classical for loop over forEach/ES6 of when dealing with huge amounts of data
* Use a version of node that ships with new TurboFan JIT compiler
* Deal with database and external APIs in batches
* Benchmark slow db calls
* Use factories or constructors to ensure objects are created using the same schema so the v8 won't have to generate hidden classes on runtime
* Analyze repeated API calls/database queries to cache them
* Debounce and throttle
* Benchmarking with load testing tools
* Analyzing and monitoring cpu bottlenecks or memory leaks with Node Clinic/Prometheus
* Serve static assets from a CDN
* Use HTTP/2
* Don't block the event loop
We may cooperate or grab ideas from: https://github.com/nodejs/diagnostics/issues/211
Is it okay to submit PRs for drafts of these?
@TheHollidayInn Yes!
We've just had some discussions about this, and have decided to change the approach for developing this section. In summary:
Do you have a bullet from the list above which you have a preference to work on first? If so, I can create the first issue based on that one so we have a good model to work on for the rest of the list.
Look forward to seeing your content, as always feel free to check in with a PR early on to check your approach if needed
Great! I had some copy written up as a quick draft for using compression. I went ahead and opened a PR here: https://github.com/i0natan/nodebestpractices/pull/288. I'm happy to change anything to fit your new flow.
Not sure if it was mentioned before, but here are other (perhaps not Node.js-centric) things that I think should be added:
Avoid lodash/underscore when native approaches outweigh the use of such libraries (a whole list of examples can be found here).
For MySQL users:
@Berkmann18 Great set of advice. And welcome aboard.
Why does removing orphaned dependencies (tree shaking) improve performance? And what is "memoising"?
@mohemos Welcome & Thank you. Adding some of the advice to the pool #302
@Berkmann18 Thank you. Tree shaking improves performance because it only includes code that is actually used, instead of importing/requiring every exported function/data structure from modules/libraries/frameworks, which makes the resulting code lighter and easier to transfer and parse. Here's another explanation of what tree shaking is.
Example:

Importing everything but using only `add`:

//NodeJS
const maths = require('maths');
//ES
import * as maths from 'maths';

Only importing `add`:

//NodeJS
const { add } = require('maths');
//Or
const add = require('maths').add;
//ES
import { add } from 'maths';
import add from 'maths/add';
Memoising is where code stores the result of doing something so that, instead of doing the exact same work again and wasting resources, it simply reuses what was already computed and stored.
Example, without memoising (the expensive work runs twice):

const fx = (x, y) => {
  console.log('The result is', doALongCalculation(x, y));
  return doALongCalculation(x, y);
};

With memoising (the result is computed once and reused):

const fx = (x, y) => {
  const result = doALongCalculation(x, y);
  console.log('The result is', result);
  return result;
};
So it's basically where the code will memorise some data that will be stored in a variable instead of re-doing the computation/fetching process again and again.
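For completeness, the term usually also covers caching results across calls, keyed by the arguments. A minimal sketch (the `JSON.stringify` key derivation is a simplification that only suits small, serializable arguments):

```javascript
// Minimal memoisation helper: repeated calls with the same arguments skip
// the expensive computation and return the cached result.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key);
  };
}

// Hypothetical usage:
const slowAdd = memoize((x, y) => x + y); // imagine an expensive function here
```

Real-world variants need cache eviction (size/TTL limits) so the cache doesn't become a memory leak of its own.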
@Berkmann18 Thanks for the comprehensive and great explanation.
Memoising sounds like a good fit for the General bullet - a bullet which holds all generic practices that are not node.js related
About tree shaking: I'm familiar with it, and for frontend the benefit is obvious, but why would it improve backend performance? At run-time all the files are parsed anyway (this is how require works). Do you also mean re-bundling the code and packing it into fewer files? I'm looking for the exact part that boosts performance
@i0natan You're welcome :smile: . Tree shaking prevents servers from fetching things they don't need, which lets them get to the main code faster and avoids going through more resources than necessary to run. It might not make much of a difference for static dependencies (aside from the size of the backend codebase), but for dynamic ones it does (at least a tiny bit). When it comes to bundling, it allows the bundling tool to include less stuff, which encourages a smaller, more optimised bundle.
We should add throttling to the list
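A minimal throttle sketch for reference; real applications would more commonly reach for `lodash.throttle` or similar, which also handle trailing-edge invocation:

```javascript
// Run `fn` at most once per `intervalMs`; calls arriving in between are
// simply dropped (no trailing call in this simplified version).
function throttle(fn, intervalMs) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= intervalMs) {
      last = now;
      return fn(...args);
    }
  };
}
```

Debouncing is the complementary technique: instead of rate-limiting, it waits until calls stop arriving for a quiet period before running once.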
We should add "Avoid Functions in Loops" - Writing functions within loops tends to result in errors due to the way the function creates a closure around the loop
@Berkmann18
It might not make much of a difference on static dependencies (aside from the size of the backend codebase) but for dynamic ones, it does (at least a tiny bit).
Can you provide an example, maybe even using code, and some data/benchmarks that show the impact? I just want to ensure we address popular issues that leave a serious performance footprint
@VinayaSathyanarayana Thanks for joining the brainstorm :]
"Avoid Functions in Loops" - Writing functions within loops tends to result in errors due to the way the function creates a closure around the loop
Can you provide a (pseudo) code example? Are we concerned because of performance, or because of code readability/errors?
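A minimal illustration of the closure pitfall being discussed (illustrative only; the concern here is correctness rather than raw performance):

```javascript
// With `var`, every callback closes over the SAME binding, so all of them
// see the loop's final value:
const withVar = [];
for (var i = 0; i < 3; i++) {
  withVar.push(() => i);
}
console.log(withVar.map((f) => f())); // [3, 3, 3] - probably not intended

// `let` creates a fresh binding per iteration, giving the expected values:
const withLet = [];
for (let j = 0; j < 3; j++) {
  withLet.push(() => j);
}
console.log(withLet.map((f) => f())); // [0, 1, 2]
```

Creating a new function object on every iteration also carries a small allocation cost, but the surprising `var` capture is the main reason this ends up on best-practice lists.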
@i0natan Sure, I've made a repo with benchmarks using only math as an example.
@AbdelrahmanHafez For your proposed idea, "Deal with database and external APIs in batches", do you have any links or references for this?
* Promisify your DB connections and run queries in parallel via async/await and Promise.all()
Not only for DB purposes: using Promise.all instead of many sequential awaits should be a performance best practice. E.g., if we need to make 4 API calls with Axios to build our complete response, instead of awaiting each one it is better to use a single await with Promise.all().
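A sketch of the difference described above; `fetchA`..`fetchD` are placeholders standing in for e.g. four independent Axios calls:

```javascript
// Sequential: total time is roughly the SUM of the four latencies.
async function buildResponseSequential(fetchA, fetchB, fetchC, fetchD) {
  const a = await fetchA();
  const b = await fetchB();
  const c = await fetchC();
  const d = await fetchD();
  return { a, b, c, d };
}

// Concurrent: total time is roughly the SLOWEST of the four latencies.
async function buildResponseParallel(fetchA, fetchB, fetchC, fetchD) {
  const [a, b, c, d] = await Promise.all([fetchA(), fetchB(), fetchC(), fetchD()]);
  return { a, b, c, d };
}
```

This only applies when the calls are independent; if one call's input depends on another's output, they must stay sequential.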
@migramcastillo Makes a lot of sense. Do you want to write that bullet?
@js-kyle Maybe add this idea to our TOC issue?
@migramcastillo Makes a lot of sense. Do you want to write that bullet?
I'd be glad to, in which branch should I make the PR?
I'd be glad to, in which branch should I make the PR?
@migramcastillo If you meant to ask what branch to raise your PR against, that would be master. If you meant to ask what branch to make your changes in, feel free to create a new branch in your fork.
Hello there! 👋 This issue has gone silent. Eerily silent. ⏳ We currently close issues after 100 days of inactivity. It has been 90 days since the last update here. If needed, you can keep it open by replying here. Thanks for being a part of the Node.js Best Practices community! 💚
* Promisify your DB connections and run queries in parallel via async/await and Promise.all()
Not only for DB purposes: using Promise.all instead of many sequential awaits should be a performance best practice. E.g., if we need to make 4 API calls with Axios to build our complete response, instead of awaiting each one it is better to use a single await with Promise.all().
Although Promise.all is better than sequential non-dependent promises in most cases, it works best only when the number of calls/promises is known beforehand, or is at least small. With a dynamic number of promises (e.g. batching at runtime), if the number of calls grows significantly it might end up exhausting the connection pool, creating deadlocks, or, for HTTP calls, spamming the downstream service. Sample Stack Overflow link: https://stackoverflow.com/questions/53964228/how-do-i-perform-a-large-batch-of-promises
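A minimal sketch of the bounded approach suggested above: run a dynamic set of tasks in fixed-size batches instead of firing them all at once. The batch size is arbitrary and would be tuned to the connection pool or downstream rate limits:

```javascript
// Run task factories in batches of `batchSize`, so at most `batchSize`
// promises are in flight at any time. `taskFns` is an array of functions
// (not promises) so work does not start before its batch.
async function runInBatches(taskFns, batchSize) {
  const results = [];
  for (let i = 0; i < taskFns.length; i += batchSize) {
    const batch = taskFns.slice(i, i + batchSize).map((fn) => fn());
    results.push(...(await Promise.all(batch)));
  }
  return results;
}
```

Libraries like p-limit improve on this by refilling slots as individual tasks finish, rather than waiting for the whole batch.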
Maybe have a section on in-memory caching and best practices around implementing and maintaining a cache. There are a few packages that support this functionality. Also, it seems no shared in-memory cache implementation is possible in cluster mode.
I would like to highlight a recent video released by the Chrome team, https://www.youtube.com/watch?v=ff4fgQxPaO0, that talks about how to make apps faster when dealing with large object literals.
We're kicking off the performance best practices section. Any idea regarding performance best practices or great bibliography to extract ideas from (youtube, blog post, etc)?
This issue will summarize all the ideas and bibliography and serve as a TOC for the writers. This is also a call for writers, and for anyone who would like to learn by practicing, discussing and writing.
@sagirk @BrunoScheufler @js-kyle