Is this version of Node built from the sources of the `vee-eight-last-lkgr` branch of the v8/node repo?
Can a ~~Node.js~~ test suite be considered real-life code?
@vsemozhetbyt that wasn't the core test suite. That was a test suite for one of my services at help.com. :]
@chicoxyzzy the build above is indeed `vee-eight-last-lkgr`, with a single commit to stub out some docs stuff that was breaking our build infra.
Personally I'd lean toward being prepared for 6.0, but I know a lot of companies can be very strict about predictable performance. I imagine there'd be a few companies that'd avoid the 8.x line until after TF+I lands so they can do a proper performance audit and have a clearer idea of what to expect.
I'm +1 on option 3.
A real-life example: run an ESLint (v4.0.0-alpha.0) check on the Node.js code base.
Test script:
```js
'use strict';
const execSync = require('child_process').execSync;
const command =
  'eslint' +
  ' --rule "indent: 0, no-multi-spaces: 0, space-before-function-paren: 0"' +
  ' --rulesdir tools/eslint-rules/' +
  ' benchmark lib test';
console.time('eslint1');
try { execSync(command, { encoding: 'utf8' }); }
catch (err) { console.error(err.stdout); }
console.timeEnd('eslint1');
console.time('eslint2');
try { execSync(command, { encoding: 'utf8' }); }
catch (err) { console.error(err.stdout); }
console.timeEnd('eslint2');
console.time('eslint3');
try { execSync(command, { encoding: 'utf8' }); }
catch (err) { console.error(err.stdout); }
console.timeEnd('eslint3');
```
Results:
```
// Node.js 8.0.0-nightly20170411b8f416023d (V8 5.7.492.69)
eslint1: 25632.138ms
eslint2: 24990.877ms
eslint3: 24390.442ms

// Node.js 8.0.0-test201704119b43f9c487 (V8 5.9.0 candidate)
eslint1: 25644.981ms
eslint2: 25228.094ms
eslint3: 24891.385ms
```
Would running CITGM with the new v8 produce any meaningful results?
Another real-world benchmark suite: https://github.com/eggjs/benchmark (a benchmark of the enterprise web framework egg.js). It includes real-world stuff like security and authentication, and it needs minimal external dependencies to run (no DB, though wrk is required): simply `npm install && npm test` and it will start running. It also uses generators and async/await heavily, so the new pipeline would make a huge difference.
CITGM run (V8 5.9.213): https://ci.nodejs.org/view/Node.js-citgm/job/citgm-smoker/711/
Results from egg's benchmark; the test build definitely outperforms the nightly.
First run (all runs: `wrk`, 10s test, 8 threads, 50 connections):

| Benchmark | Latency (avg / stdev / max) | Req/Sec (avg) | Requests | Requests/sec | Transfer/sec |
|---|---|---|---|---|---|
| koa hello | 5.61ms / 8.84ms / 236.41ms | 1.24k | 98433 in 10.01s | 9832.27 | 1.48MB |
| toa hello | 5.63ms / 5.20ms / 92.20ms | 1.30k | 103731 in 10.02s | 10348.31 | 1.75MB |
| egg hello | 9.02ms / 10.37ms / 252.42ms | 736.79 | 58394 in 10.01s | 5832.80 | 1.96MB |
| egg hello (Async Await) | 8.48ms / 3.87ms / 135.44ms | 725.35 | 57742 in 10.02s | 5762.01 | 1.93MB |
| koa view | 8.67ms / 10.26ms / 243.42ms | 782.52 | 62071 in 10.02s | 6192.35 | 15.48MB |
| toa view | 9.41ms / 7.93ms / 165.57ms | 725.59 | 57704 in 10.03s | 5751.99 | 14.48MB |
| egg nunjucks view | 20.29ms / 29.04ms / 443.84ms | 379.83 | 29916 in 10.09s | 2965.36 | 7.99MB |
| egg ejs view | 13.74ms / 8.63ms / 242.56ms | 453.48 | 36095 in 10.02s | 3602.51 | 9.77MB |
| egg nunjucks view (Async Await) | 16.44ms / 10.45ms / 243.80ms | 380.81 | 30279 in 10.02s | 3021.16 | 8.15MB |
| egg ejs view (Async Await) | 13.69ms / 8.50ms / 197.92ms | 456.37 | 36312 in 10.02s | 3625.08 | 9.86MB |
| egg passport | 9.94ms / 12.80ms / 290.00ms | 685.20 | 54283 in 10.03s | 5414.16 | 1.81MB |
| egg passport (Async Await) | 8.03ms / 2.86ms / 108.64ms | 759.24 | 60554 in 10.10s | 5996.82 | 2.02MB |
Second run:

| Benchmark | Latency (avg / stdev / max) | Req/Sec (avg) | Requests | Requests/sec | Transfer/sec |
|---|---|---|---|---|---|
| koa hello | 5.17ms / 4.23ms / 139.91ms | 1.25k | 99458 in 10.02s | 9923.77 | 1.50MB |
| toa hello | 5.89ms / 6.32ms / 112.00ms | 1.34k | 106674 in 10.02s | 10640.94 | 1.80MB |
| egg hello | 7.74ms / 4.76ms / 160.21ms | 808.03 | 64295 in 10.03s | 6409.31 | 2.15MB |
| egg hello (Async Await) | 7.37ms / 4.70ms / 159.82ms | 844.56 | 67233 in 10.02s | 6710.81 | 2.25MB |
| koa view | 9.18ms / 7.66ms / 182.34ms | 727.27 | 57838 in 10.02s | 5772.13 | 14.43MB |
| toa view | 9.64ms / 9.29ms / 171.58ms | 756.45 | 60239 in 10.02s | 6009.90 | 15.13MB |
| egg nunjucks view | 17.02ms / 13.60ms / 282.96ms | 381.47 | 30276 in 10.02s | 3020.98 | 8.14MB |
| egg ejs view | 12.61ms / 7.43ms / 187.54ms | 496.42 | 39494 in 10.02s | 3941.07 | 10.69MB |
| egg nunjucks view (Async Await) | 16.16ms / 10.62ms / 240.12ms | 393.36 | 31315 in 10.02s | 3124.77 | 8.43MB |
| egg ejs view (Async Await) | 12.56ms / 7.65ms / 187.92ms | 499.39 | 39756 in 10.09s | 3939.64 | 10.71MB |
| egg passport | 9.09ms / 7.55ms / 204.95ms | 710.58 | 56504 in 10.02s | 5639.99 | 1.88MB |
| egg passport (Async Await) | 6.83ms / 3.24ms / 127.36ms | 0.90k | 71777 in 10.02s | 7160.32 | 2.42MB |
(Trying to write a script to output structured data with it. UPDATE: PR opened in https://github.com/eggjs/benchmark/pull/6)
Sitting here with @mcollina running a number of module benchmarks, and we're not yet seeing a significant performance delta between 8-with-5.9 and master or 6.10.x. We will be running a number of additional benchmarks over the coming couple of days to prove that out.
If there is not going to be a significant difference between the two, then I see less of a need to delay the release for 5.9, but I'm also not seeing a significant technical reason not to. It's a bit up in the air, I think. We'll definitely need to run additional tests, but if the performance profile is not significantly different, then it should be OK to ship 8.0.0 with 5.7 or 5.8 on time this month (option 1) and move up to 5.9 later in 8.x, assuming ABI compatibility is preserved without any issue.
It might also be nice to see a comparison of some typical front-end workflows, e.g. `time npm i`.
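A minimal sketch of how that comparison could be scripted (the binary names `node` and `node-canary` and the npm path below are assumptions; point them at the builds you actually want to compare):

```js
'use strict';
// Time a cold `npm install` of the same project under two Node binaries.
// NPM_CLI and the binary names are hypothetical -- adjust them to where
// npm-cli.js actually lives on your machine and to the builds under test.
const { execSync } = require('child_process');

const NPM_CLI = '/usr/local/lib/node_modules/npm/bin/npm-cli.js';

for (const bin of ['node', 'node-canary']) {
  execSync('rm -rf node_modules', { stdio: 'inherit' }); // cold start each run (Unix-only)
  console.time(`npm install via ${bin}`);
  // Run npm's CLI entry point with the chosen binary so the install itself
  // executes under that Node/V8 version rather than the system default.
  execSync(`${bin} "${NPM_CLI}" install`, { stdio: 'inherit' });
  console.timeEnd(`npm install via ${bin}`);
}
```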
ABI compatibility can be preserved if this patch is applied on top of the upgrade to V8 5.8. I'm still waiting for CITGM to verify that though :)
@joyeecheung Awesome, thanks for doing the measurement.
@fhinkel do you think we should investigate whether it makes sense to turn eslint and eggjs into benchmarks similar to AcmeAir?
/cc @ofrobots for benchmark question.
@vsemozhetbyt
> A real-life example: run an ESLint (v4.0.0-alpha.0) check on the Node.js code base.
Just to make sure: you put the tested Node binary on your PATH, right? Because if not, you might actually have been testing your system Node both times, and only the launcher script might have run under Node 8.
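One quick way to rule that out is to have the measured process report which binary it is actually running on, for example:

```js
// Print the binary path, Node version, and V8 version of the current process;
// dropping a line like this into the script under test makes it obvious
// whether the system Node or the test build is doing the work.
console.log(process.execPath, process.version, process.versions.v8);
```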
FWIW, a similar (but not equivalent) measurement:
```
➜ node git:(master) ✗ node -e 'console.log(process.versions.v8)'
5.5.372.43
➜ node git:(master) ✗ node-master -e 'console.log(process.versions.v8)'
5.7.492.69
➜ node git:(master) ✗ node-canary -e 'console.log(process.versions.v8)'
5.9.203
➜ node git:(master) ✗ time node tools/eslint/bin/eslint.js --rulesdir tools/eslint-rules benchmark lib test
node tools/eslint/bin/eslint.js --rulesdir tools/eslint-rules benchmark lib  19.08s user 0.57s system 114% cpu 17.226 total
➜ node git:(master) ✗ time node-master tools/eslint/bin/eslint.js --rulesdir tools/eslint-rules benchmark lib test
node-master tools/eslint/bin/eslint.js --rulesdir tools/eslint-rules benchmar  12.15s user 0.28s system 107% cpu 11.591 total
➜ node git:(master) ✗ time node-canary tools/eslint/bin/eslint.js --rulesdir tools/eslint-rules benchmark lib test
node-canary tools/eslint/bin/eslint.js --rulesdir tools/eslint-rules benchmar  12.61s user 0.25s system 114% cpu 11.208 total
```
I think I am +1 on option 3. If delaying the release of Node 8 is not feasible, then at least option 2 for me.
IMO we should be treating Node 8 the same way we treat any other "Current" branch (until it becomes LTS of course). I feel like going with option 1 would make using Node 8 a lot less appealing come October. We have had semver-minor upgrades of V8 in the last two major versions (v7, v6) and I think we should continue to try to track the latest version of V8 possible (wasn't that part of the reason for iojs in the first place?). We would have 5-6 months to get things up to par, but from my current testing (limited of course, but still real applications), real world performance has actually improved quite a lot with V8 5.8 and 5.9.
@aqrln I've taken this code in `eslint.cmd` into account:
```bat
@IF EXIST "%~dp0\node.exe" (
  "%~dp0\node.exe" "%~dp0\node_modules\eslint\bin\eslint.js" %*
) ELSE (
  @SETLOCAL
  @SET PATHEXT=%PATHEXT:;.JS;=;%
  node "%~dp0\node_modules\eslint\bin\eslint.js" %*
)
```
So I just put various Node.js versions in the global module folder. I've checked this approach with a process manager to be sure I tested the right version.
Here are some numbers for building a react/redux application with webpack (NODE_ENV=production, babel-loader, css-loader, url-loader, babili-webpack-plugin, code splitting, source maps...)
```
node-current (V8 5.5.372): 57,85s user 0,85s system 102% cpu 57,042 total
node-master  (V8 5.7.492): 55,72s user 0,80s system 104% cpu 53,855 total
node-canary  (V8 5.9.213): 51,09s user 0,81s system 108% cpu 47,628 total
```
@jasnell The need for a delay has nothing to do with perf in 5.9... it was about being able to have API/ABI compat up to 6.0, due to churn between 5.9 and 6.0.
Yeah, I know there's more to it than just the performance and I still want to see what the consensus direction is. If we push off a month, then so be it :-) My goal, btw, is to get another test build later on today but it's going to depend on availability and reliability of the wifi connection from my hotel.
I'm thinking that people using the LTS releases generally favor stability over new features and performance improvements. I also see TurboFan + Ignition being on by default as a relatively larger risk to stability than the other V8 releases we have pulled in during past releases. For these reasons I'd be leaning towards option 1.
I'd like to chime in for a second to share my own small, non-scientific point of view -- apologies if this is out-of-order.
As someone who writes libraries for Node.js, I would like to see the new pipeline regardless of current performance issues. A longstanding pain point with Node.js has been V8's legacy compiler tree effectively disallowing certain language features in performance-critical areas, which to me has meant extra mental overhead and stress to make sure I'm playing nice with the JIT, even in areas that aren't necessarily high stress.
I would much prefer to work in an environment that doesn't cause me to worry (at least reflexively) if I'm using a technique that's going to arbitrarily slow down or otherwise negatively impact my code, where I can use features in the language that fit my mental model for what lexically makes sense in the code block I'm working with.
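A concrete (and deliberately simplified) illustration of the kind of workaround being described, assuming the well-known Crankshaft limitation that functions containing `try`/`catch` are not optimized; the function names are hypothetical:

```js
'use strict';
// Illustrative only: under Crankshaft, a function containing try/catch is
// never optimized, so hot code is commonly split so the try/catch lives in a
// thin wrapper and the hot path stays in a separate, optimizable function.
function parseLine(line) {        // hypothetical hot function
  return JSON.parse(line);
}

function safeParseLine(line) {    // wrapper that holds the "toxic" try/catch
  try {
    return parseLine(line);
  } catch (e) {
    return null;
  }
}

// With TurboFan + Ignition this kind of split should no longer be needed
// purely for optimization reasons.
console.log(safeParseLine('{"ok": true}'), safeParseLine('not json'));
```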
Thanks for all your effort with the project!
I'll be doing the benchmark of my real-life app tomorrow or the day after. As we only work with Docker, I forked docker-node and made Dockerfiles for v8.0.0-test201704119b43f9c487. I hope this helps others make their own benchmarks.
Dockerfiles repo https://github.com/sirgallifrey/docker-node-test
Images on dockerhub https://hub.docker.com/r/sirgallifrey/node-test/tags/
I would love to see a performance guide for collaborators regarding this migration. I think it will cause a lot of confusion about what's slow and what's fast, and we all know how hard it is to measure this sort of performance change.
👍 for option 2 or 3. They seem to be the options that enable a better migration to new JavaScript features without completely forsaking performance. Being tied to V8 <= 5.8 for the next 1.5 years (as an LTS user) doesn't seem great.
Security and maintainability are priorities for me; minor changes in benchmarks aren't top of mind. Thinking forward to 6.0 is the way to go, and quick reaction to security issues is a must when using Node as the base for services, so my choice would be 3.
I support Option 1.
Options 2 and 3 both involve swapping out the V8 version on a major release line.
I understood that one of the guarantees of a major was supposed to be that it had a stable V8, wasn't it? And because keeping V8 stable holds back new features, we also release new majors twice yearly, so we offer both stability and new ES features, and people can choose.
We should particularly not swap out V8 on a major just before it gets moved to LTS; how can we recommend LTS for use in production if it has no history of use? There are good reasons we leave a major in active use for several months before it becomes LTS.
In that case, 8.x shouldn't be called LTS, at least not until whatever version of v8 gets swapped into it has a chance to prove itself, which would mean delaying when 8.x goes LTS. Or, we could extend the 6.x lifetime, and make 9.x be the next LTS.
Basically, I think swapping V8s in LTS is a change of such magnitude that it means rethinking what LTS means and what our LTS schedule is. I'm not wedded to our LTS schedule; I think it could change. But I don't think going from V8 5.8 to 6.0 is just another minor.
I support Option 1.
@sam-github kinda sounds like you support Option 1 or Option 3++ (delay much longer) 😁
I like the whooshing sound deadlines make when they fly by so I'm perfectly fine with option 3, delaying for a month.
Option 1 sounds unworkable long-term unless we have someone dedicated to back-porting patches. (We don't.)
That said:
6.0 API / ABI should be relatively stable between May 17 - 25
What are the expected changes between 5.9 and 6.0? We could do a 5.8 -> 5.9 -> 6.0 triple upgrade combo if they are trivial. (I'm dubbing that option 2.5.)
> What are the expected changes between 5.9 and 6.0? We could do a 5.8 -> 5.9 -> 6.0 triple upgrade combo if they are trivial. (I'm dubbing that option 2.5.)
Not all of the V8 team is focused on Node.js, and there is a lot of work going on between V8 and Chrome as well, so we cannot guarantee ABI stability from 5.9 to 6.0. That being said, I don't expect major changes to the API.
Once 6.0 goes into API freeze around the second half of May, back-porting changes to make 5.8 and 5.9 ABI compatible with 6.0 would be fairly mechanical, as has been done here. However, that only makes sense if the release date of Node 8 can be pushed back to the second half of May, so that we can make these back ports to 5.8 first.
@zackschuster
> As someone who writes libraries for Node.js, I would like to see the new pipeline regardless of current performance issues. A longstanding pain point with Node.js has been V8's legacy compiler tree effectively disallowing certain language features in performance-critical areas, which to me has meant extra mental overhead and stress to make sure I'm playing nice with the JIT, even in areas that aren't necessarily high stress.
Note that we're only replacing one set of "disallowed" language features that we're familiar with by another set that we're not yet familiar with. We have no reason to believe TF will optimize everything, and from the short time playing with it there definitely are pitfalls to run into.
> In that case, 8.x shouldn't be called LTS, at least not until whatever version of v8 gets swapped into it has a chance to prove itself, which would mean delaying when 8.x goes LTS.
@sam-github I may be misunderstanding the comment above, but I don't think it's quite right. 8.x will not be LTS until October. (Node 6.0.0 was not LTS; LTS only started with 6.9.0.) So if we update to V8 6.0 in late May, that still gives us months of testing before the first LTS version of the 8.x release line.
@sam-github it is also worth mentioning there is prior art for this. For 6.x we ran the beta/RC with a beta version of V8 5.0, rolling to stable right before the release. We then updated to 5.1 before LTS. I, for one, am really glad we did so.
The biggest difference here is that we would be preemptively setting up forward-compatible ABI/API compat.
> We have no reason to believe TF will optimize everything
I actually thought quite the opposite. The claim was that even if peak performance might (but doesn't have to) be lower, the better baseline should give developers more confidence when using ES6 features as well as "toxic" ES5 ones like try/catch/finally etc.
> and from the short time playing with it there definitely are pitfalls to run into.
Which I guess should be reported.
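One low-tech way to catch such pitfalls (and to gather something concrete to report upstream) is to run the suspect code under V8's tracing flags and watch for functions that keep deoptimizing. A rough sketch, with a stand-in hot function:

```js
'use strict';
// Run as: node --trace-opt --trace-deopt pitfall.js
// (both are standard V8 flags that Node passes through); then grep the output
// for the function name to see whether it gets optimized or repeatedly deopts.
function hot(obj) {
  return obj.a + obj.b;
}

for (let i = 0; i < 1e6; i++) {
  // Mixing object shapes is the kind of change that can show up in the trace;
  // anything that deoptimizes over and over is worth filing as an issue.
  hot(i % 2 ? { a: i, b: 1 } : { b: 1, a: i, c: true });
}
```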
@benjamingr Thanks for the reply! True, there are lots of unknowns and potential for broken promises, and I would never suggest my opinion on its own is a rock-solid basis for a decision of this magnitude. This is merely in the interest of providing some (admittedly minute) community feedback 😄
Option 3 seems to be the 'sanest' from a maintenance point of view
> So if we update to V8 6.0 in late May, that still gives us months of testing before the first LTS version of the 8.x release line.
If we want to sync node releases with v8, then why don't we do that? Release 8.x right after 6.0 is released? Why did we choose twice a year at arbitrary times if V8 alignment is so important?
I'm not going to rail forever against switching V8 in the midst of a major. I'm kind of curious what kind of commotion an entirely new optimization pipeline will cause. Maybe it will be no big deal, and we can be more free with this in the future. Maybe it will be a low point in Node.js stability. I've no crystal ball.
@sam-github afaik the release schedule we had was designed around what was preferable for enterprises and shops that usually work around LTS releases. @mhdawson can speak to this.
If we want to wait until 6.0 is released we will be delaying until around August 1st... which is not tenable imho.
The current release schedule was selected after many conversations with stakeholders throughout the ecosystem based on what makes the most sense for adopters. It was understood at the time that the V8 release schedule mismatch would cause some headaches and nothing has changed in that regard. We knew what we were signing up for and we knew that discussions like this would come up. At this point I am +1 on Option 3 but I am calling for an official @nodejs/ctc vote to settle the matter.
@nodejs/ctc members, please weigh in.
Abstain.
I have an opinion, but I defer to those who do releases and/or otherwise will be most impacted by the decision here.
I'll also note that if we don't have an option that gets a majority of non-abstaining CTC members to endorse it, we'll need a run-off vote to achieve that. Our governance rules require a majority of non-abstaining CTC members to get a decision (in voting situations).
Tallying CTC votes.
Option | Votes | Who |
---|---|---|
Option 1 | 1 | @mhdawson |
Option 2 | 1 | @Fishrock123 |
Option 3 | 12 | @jasnell @addaleax @evanlucas @bnoordhuis @MylesBorins @ChALkeR @fhinkel @cjihrig @indutny @misterdjules @ofrobots @shigeki |
Abstain | 2 | @Trott @thefourtheye |
No Vote Registered Yet | 4 | @mscdex @rvagg @targos @trevnorris |
Votes Required To Be Accepted | 10 | n/a |
I vote for Option 3
I vote for option 3
Please feel free to edit the original comment to tally votes
> On Apr 14, 2017 7:03 PM, "Evan Lucas" notifications@github.com wrote:
>
> I vote for Option 3
Added a link at the bottom of the original comment to the comment with the vote tally.
Ok, my votes for each specific proposal would be:
If one option has to be picked, I vote for option 3.
My votes:
If one option has to be picked, I vote for option 3.
EDIT: CTC voting tally for this issue
Problem
V8 5.9 will be the first version with TurboFan + Ignition (TF+I) turned on by default. As parts of the Node.js codebase have been tuned to Crankshaft, there will be a non-trivial amount of churn to adapt to the new pipeline. This also creates a security risk, as Crankshaft and FullCodeGen are no longer maintained by the V8 team or tested by the Chrome security team.
If TF+I lands in Node.js 9.x, backporting any changes to Node.js 8.x is going to prove extremely difficult and time-consuming.
Below are three proposals for how we can approach this problem. To anyone in @nodejs/collaborators, and the community at large: we would love to hear your opinions on this. Further, we really want to get some real-world benchmarks, so if you have a way of testing builds to get non-micro benchmarks, please chime in and I'll get you binaries to work with.
Three Proposals:
1) Target 5.7 or 5.8 in the 8.x release with standard API / ABI:
Do not change anything.
Pros:
Cons:
Backports will require different performance profiling due to different pipeline
2) Target 5.8 in 8.x release with forward compatible ABI to 5.9, upgrade to TF+I as semver minor:
Land 5.8 with 5.9 forward compatible API / ABI in 8.x beta
Upgrade to 5.9 with TF+I when it goes stable in early June.
Backport from upstream to 5.9 as changes land
Pros:
Cons:
Potential for large churn in core in an LTS release to fix performance issues and subtle changes to runtime
3) Target 5.8 in 8.x release. Delay release 3 - 4 weeks to allow forward compatible ABI to 6.0. Upgrade to TF+I as semver minor
Land 5.8 with 6.0 forward compatible API / ABI in 8.x beta
Upgrade to 5.9 with TF+I when it goes stable in early June
Upgrade to 6.0 when it goes stable in early August
Pros:
Cons:
What needs to be done?
Testing
Test build of 8.x including V8 5.9 (TF + I turned on)
Install with nvm
EDIT: CTC voting tally for this issue